To answer the questions at the end: it depends on the RDBMS architecture. For Ex. 1, the answer is almost always no; reads don't block other reads. For Ex. 2, if the rows share a block on disk there may be contention, but the intent to write typically won't block reads. It also depends on whether your database uses optimistic or pessimistic locking. Most modern systems lean optimistic: they lock only briefly while the record is updated and won't prevent a subsequent read.
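As a concrete illustration, SQL Server's default READ COMMITTED level takes pessimistic shared locks for reads, but it can be switched to optimistic, version-based reads. A minimal sketch, assuming SQL Server and a hypothetical database named MyAppDb:

```sql
-- With this setting on, readers see the last committed row version
-- instead of blocking on a writer's exclusive lock (row versioning).
-- MyAppDb is a placeholder name; other active sessions may need to
-- disconnect before the option can take effect.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;
```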
There are several types of database locks, and the details depend on the internal RDBMS architecture, but I'll show a few MS SQL Server locks since its lock model is quite broad. They can be grouped by type and range:
Types:
Shared or read lock: Access is shared. Indicates that a process is reading data. Allows concurrent reads.
Exclusive lock: Indicates that a process wants to write. Does not allow other reads or writes.
Range:
Table: locks the whole table.
Block: locks a physical (or logical) block on disk (SQL Server calls this a page).
Row: locks an individual row.
These are the basic types; each database will have others. Reads can be done concurrently, but writes to the same data need to be serialized.
A table truncation may take an exclusive table lock, for example, while a row update may lock a single block against other reads and writes.
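To make the granularity concrete, here is a rough T-SQL sketch (assuming SQL Server and a hypothetical dbo.Orders table) that requests row-level and table-level exclusive locks with table hints and then inspects what the session holds via sys.dm_tran_locks:

```sql
-- dbo.Orders is a made-up table; run inside a transaction so the
-- locks stay around long enough to inspect.
BEGIN TRANSACTION;

-- Exclusive lock on a single row: only this row is blocked for other writers.
UPDATE dbo.Orders WITH (ROWLOCK)
SET    Status = 'SHIPPED'
WHERE  OrderId = 42;

-- Exclusive lock on the whole table: blocks all other readers and writers.
SELECT COUNT(*) FROM dbo.Orders WITH (TABLOCKX);

-- Show the locks held by this session (row/page/object, shared/exclusive/intent).
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK;
```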
SQL Server also provides a user-defined (application) lock, taken with sp_getapplock. These can be used to lock across tables, for example in triggers, though they are a less-than-optimal solution. It also has intent locks (which others like Oracle don't have) that indicate you intend to update a record but may or may not actually do so. These can be a source of deadlocks if you are careless with triggers and stored procedures.
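A hedged sketch of such a user-defined lock with sp_getapplock; the resource name 'invoice-batch' and the surrounding logic are made up for illustration:

```sql
-- An application lock is just a named lock, unrelated to any table,
-- so two sessions agreeing on the name can serialize cross-table work.
BEGIN TRANSACTION;

DECLARE @result int;
EXEC @result = sp_getapplock
     @Resource    = 'invoice-batch',   -- hypothetical resource name
     @LockMode    = 'Exclusive',
     @LockOwner   = 'Transaction',
     @LockTimeout = 5000;              -- wait up to 5 seconds

IF @result >= 0
BEGIN
    -- Do the cross-table work that must not run concurrently...
    COMMIT;                            -- lock is released with the transaction
END
ELSE
    ROLLBACK;                          -- could not acquire the lock in time
```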
This article gives more information on SQL Server: https://technet.microsoft.com/en-us/library/aa213039(v=sql.80).aspx
Refer to your specific database documentation for how it handles locks.