As Robb Watts' answer states, Facebook has acknowledged this was part of the problem, so we know the claim is true. ("...it took extra time to activate the secure access protocols needed to get people onsite and able to work on the servers.") Personal communication by unnamed sources to a credentialed tech reporter specifically made the claim that card access was down, although Facebook isn't giving us that level of detail.
That answer is the only one needed to address the immediate claim.
This answer looks at some proposed mechanisms for why this might have been the case, given the larger context of the outage. (Consider it a supplement--if the original question was "Why did JFK die?" and the strictly correct answer is "He was shot," this answer explains how being shot results in death.)
As of this writing, Facebook has not given more detail; however, many social media posts have explored mechanisms for how a networking problem would impede building access--namely, that the electronic systems authorizing access were also caught up in the outage.
Outside parties like Cloudflare, a major Internet infrastructure company unrelated to Facebook, originally became aware of the issue through missing DNS records. DNS is the lookup system that converts memorable resource names--like website domains--into the actual numeric addresses currently providing the resource. Early speculation suggested that with DNS down, Facebook also could not reach its own systems, including the LDAP directory system that would track which employees are allowed to access which facilities.
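To make that name-to-address step concrete, here's a minimal sketch using only the Python standard library; facebook.com is real, but the failure branch is what every resolver on the Internet hit during the outage:

```python
import socket

# DNS in one call: convert a memorable name into the numeric
# address currently serving it.
try:
    print(socket.gethostbyname("facebook.com"))  # normally an address like 157.240.x.x
except socket.gaierror as err:
    # During the outage, lookups for facebook.com ended up here.
    print("resolution failed:", err)
```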
However, Facebook's writeup of the outage indicates that the order of events was a little different. A routine maintenance operation gone awry accidentally shut down the main internal networking connections (the "backbone") between Facebook data centers. As a result, none of Facebook's internal systems could communicate. Facebook's internal DNS servers--the machines that tell traffic how to get to Facebook--also lost connectivity to the data centers. Those servers are designed to function only if they believe they can provide reliable data: if they lose their connection to the actual Facebook servers, they can't do their job of telling others where to find Facebook resources. So they tell the whole Internet to stop asking them, using the Border Gateway Protocol, or BGP (the system routers use to map the best ways to send traffic back and forth).
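Facebook hasn't published its health-check code, but the behavior it describes amounts to the logic below--a hypothetical sketch in which a DNS server announces its service prefix via BGP only while it can reach the backbone. The prefix, the probe target, and the `update_announcement` stand-in for a real BGP daemon (something like BIRD or ExaBGP) are all assumptions:

```python
import socket

SERVICE_PREFIX = "198.51.100.0/24"   # hypothetical anycast prefix for the DNS service
BACKBONE_PROBE = ("10.0.0.1", 443)   # hypothetical internal health-check target

def backbone_reachable(target, timeout=2.0):
    """Crude health check: can this DNS server still open a connection into the backbone?"""
    try:
        with socket.create_connection(target, timeout=timeout):
            return True
    except OSError:
        return False

def update_announcement(healthy):
    # Stand-in for driving a real BGP speaker; withdrawing the route tells
    # the rest of the Internet to stop sending DNS queries to this server.
    action = "announce" if healthy else "withdraw"
    print(f"{action} route {SERVICE_PREFIX}")

update_announcement(backbone_reachable(BACKBONE_PROBE))
```

With the backbone down, every DNS server running logic like this withdraws itself at once--which is exactly the mass "calling in sick" described next.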
Essentially, at that point, Facebook's DNS servers all called in sick at once, and nobody could find Facebook any more. But this wasn't strictly a DNS problem, or even strictly a BGP problem, as careful observers realized soon after (though the BGP-to-DNS failure did cause splash damage to the whole Internet in the form of elevated DNS traffic from retrying resolvers). Connections between the broader Internet and Facebook services' load balancers (which direct traffic from outside to specific locations inside Facebook's networks) still worked in some cases. The root cause was that Facebook had nuked its own internal networking.
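You can see that distinction in a few lines of Python: name resolution for facebook.com failed outright, while a previously cached edge (load balancer) address could, in some cases, still accept TCP connections. The specific address below is a documentation-range placeholder, not a real Facebook edge:

```python
import socket

def resolves(name):
    """Does DNS still work for this name?"""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def accepts_tcp(ip, port=443, timeout=3.0):
    """Does a known edge address still answer, bypassing DNS entirely?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print("DNS for facebook.com:", resolves("facebook.com"))
print("TCP to cached edge IP:", accepts_tcp("203.0.113.10"))  # placeholder address
```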
Regardless of the exact mechanism, the impact on physical access would be a breakdown of communication between the door lock readers--which get an ID code from an employee's badge--and the directory system that confirms which employee IDs are supposed to have access to which facility. I had originally stated this was due to the DNS problem (meaning that the door readers could no longer find the location of the LDAP server), but best practice is to make directory servers accessible only on private (or virtual private) networks, not the Internet (see also here and probably more references than I have time to track down). It's more likely that the directory server that grants access was connected through the same internal backbone that went down to begin with.
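As a hedged illustration of that dependency, here's roughly what a badge reader's backend check might look like against an LDAP directory, sketched with the Python ldap3 library. The hostname, search base, and attribute names (`badgeId`, `facilityAccess`) are all assumptions for illustration, not anything Facebook has published:

```python
from ldap3 import Server, Connection
from ldap3.core.exceptions import LDAPException

def badge_allowed(badge_id, facility, directory_host="ldap.corp.example.com"):
    """Ask the directory whether this badge ID has access to this facility."""
    try:
        server = Server(directory_host, connect_timeout=5)
        conn = Connection(server, auto_bind=True)  # anonymous bind, for the sketch only
        conn.search(
            "ou=people,dc=example,dc=com",
            f"(&(badgeId={badge_id})(facilityAccess={facility}))",
            attributes=["cn"],
        )
        return bool(conn.entries)
    except LDAPException:
        # Directory unreachable--e.g. the internal backbone is down--so the
        # reader fails closed and the door stays locked.
        return False
```

The important part is the `except` branch: with the backbone down, every swipe lands there, and failing closed is exactly what you want from a security standpoint--even though it locks out the very people trying to fix the problem.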
In any event, there's a physical override for this, with an old-fashioned key. But you don't issue a copy of that key to everybody with access to the building--they might make copies, you'd have to collect the keys back when their roles changed, and so on. Instead, a small security team holds the physical-access overrides. However, to the extent that the engineering teams use Facebook's internal products (e.g. Messenger) for communication, those would also have been impaired by the outage; and with the directory unreachable, there would have been delays in finding other contact information.
Again, this is a reconstruction of the mechanism by which physical access would have been impaired. We won't know for-sure-for-sure until and unless Facebook releases a more specific post-mortem, but my aim is to demonstrate the plausibility of the reported claims based on the surrounding circumstances.