Here's an example I found in my notes. I'm figuring out Transformers myself, so I can't offer much insight beyond the fact that this snippet seems to work, and, as you can see, setting the attribute on the config object is apparently where 'meta' goes. You can of course open up the library source in site-packages if you want to drill down and understand what that option actually does in the code.
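If you want to go digging like that, here's a minimal sketch of how I'd locate the files to read (assuming a standard pip install; note that model code pulled in by trust_remote_code is downloaded into the Hugging Face cache rather than site-packages):

import os
import inspect
import transformers

# Where the installed transformers package lives (your site-packages)
print(inspect.getsourcefile(transformers))

# Code fetched via trust_remote_code (e.g. the MPT modeling files) is
# cached here by default (assumption: default cache path, no HF_HOME override)
print(os.path.expanduser('~/.cache/huggingface'))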
Example:
import torch
import transformers
from transformers import AutoTokenizer

print("starting")
name = 'mosaicml/mpt-30b-instruct'

# The tokenizer is loaded from the base MPT-30B repo
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
# 'meta' builds the model skeleton without allocating or initializing
# weights, so construction is fast; the real weights are filled in
# when from_pretrained loads the checkpoint
config.init_device = 'meta'

print("loading model")
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
print("model loaded")