Absolutely, and not just simple scraping.
If you think about it, using the browser itself is as close as you can get to replicating a real user session. You don't have to worry about manually setting cookies or discovering and constructing JSON HTTP requests; the browser does all of that for you. After a page has been rendered (with or without JavaScript) you can access the DOM and extract any content you like.
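For example, once the page has rendered, a plain content script (or even the devtools console) can pull structured data out of the DOM with nothing but CSS selectors; the selectors here are made-up placeholders:

    // runs in the page context, e.g. as a content script
    // '.article', 'h2' and 'a' are hypothetical selectors for whatever you're scraping
    const rows = Array.from(document.querySelectorAll('.article')).map(el => ({
      title: el.querySelector('h2')?.textContent.trim(),
      link: el.querySelector('a')?.href,
    }));
    console.log(JSON.stringify(rows, null, 2));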
Take a look at https://github.com/get-set-fetch/extension, an open source browser extension that does more than just basic scraping. It supports infinite scrolling, clicking, and extracting content from single-page JavaScript apps.
Disclaimer: I'm the extension author.
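The infinite-scroll part, for instance, boils down to something like this (a rough sketch of the idea, not the extension's actual code): scroll to the bottom, give lazy-loaded content time to arrive, and stop once the page height stops growing.

    // rough sketch of infinite scrolling; not the extension's actual implementation
    async function autoScroll(maxIdleRounds = 3) {
      let idle = 0;
      let lastHeight = 0;
      while (idle < maxIdleRounds) {
        window.scrollTo(0, document.body.scrollHeight);
        await new Promise(r => setTimeout(r, 1000)); // wait for lazy-loaded content
        if (document.body.scrollHeight === lastHeight) {
          idle++;                                    // nothing new appeared
        } else {
          idle = 0;                                  // new content, keep going
          lastHeight = document.body.scrollHeight;
        }
      }
    }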
If you're serious about the subject, start by developing a simple Chrome extension (from my own experience, Chrome throws more verbose extension errors than Firefox): https://developer.chrome.com/extensions/getstarted
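The skeleton is tiny. Assuming a manifest v2 style manifest that declares the "tabs" and "<all_urls>" permissions, a browser_action, and background.js as the background script, opening a page and grabbing something from it looks roughly like this:

    // background.js -- a minimal sketch, not production code
    chrome.browserAction.onClicked.addListener(() => {
      chrome.tabs.create({ url: 'https://example.com' }, (tab) => {
        // wait until the tab finishes loading before injecting anything
        chrome.tabs.onUpdated.addListener(function listener(tabId, info) {
          if (tabId === tab.id && info.status === 'complete') {
            chrome.tabs.onUpdated.removeListener(listener);
            chrome.tabs.executeScript(tabId, { code: 'document.title' }, (results) => {
              console.log('scraped title:', results && results[0]);
            });
          }
        });
      });
    });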
Afterwards, take a look at the main get-set-fetch background plugins: FetchPlugin (loads a URL in a tab and waits for the DOM to stabilize), ExtractUrlPlugin (identifies additional URLs to scrape from the current one), ExtractHtmlContentPlugin (the actual scraping, based on CSS selectors).
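The "wait for the DOM to stabilize" step is essentially a debounced MutationObserver; here's the rough idea (a sketch, not FetchPlugin's actual code):

    // resolve once the DOM has been quiet for quietMs, or after timeoutMs regardless
    function waitForDomToStabilize(quietMs = 1000, timeoutMs = 10000) {
      return new Promise((resolve) => {
        let timer = setTimeout(done, quietMs);        // fires if nothing mutates for quietMs
        const hardStop = setTimeout(done, timeoutMs); // safety net for pages that never settle
        const observer = new MutationObserver(() => {
          clearTimeout(timer);
          timer = setTimeout(done, quietMs);          // mutation seen, restart the quiet period
        });
        observer.observe(document, { childList: true, subtree: true, attributes: true });
        function done() {
          observer.disconnect();
          clearTimeout(timer);
          clearTimeout(hardStop);
          resolve();
        }
      });
    }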
There are downsides, though. It's a lot easier to run a scraping script in your favorite language and dump the scraped content into a database than to automatically start the browser, load the extension, control it, export the scraped data to a format like CSV, and import that data into a database.
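If you do want that end-to-end automation, the usual route is to drive the browser from the outside, e.g. Puppeteer can start Chrome with an unpacked extension loaded (a sketch; the extension path is a placeholder, and extensions need a headful browser):

    // launch Chrome with an unpacked extension via Puppeteer
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({
        headless: false, // extensions don't run in headless mode
        args: [
          '--disable-extensions-except=/path/to/extension', // placeholder path
          '--load-extension=/path/to/extension',
        ],
      });
      // ...drive the extension / pages here, then export and import the results yourself
      await browser.close();
    })();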
In my opinion, it only makes sense to use a browser extension if you don't want to automate the data extraction, or if the page you're trying to scrape is so JavaScript-heavy that it's easier to automate the extension than to write a scraping script.