It would help to post the schema for the table, as it's difficult to know what data types the columns are or what indexes you have.
Aside from that, what stands out most is that your col_date field is being created at SELECT time. That means you're building a new, unindexed column just for this query, and that column is then used in the GROUP BY - so you're effectively computing a new value row by row for 80m rows, and then table scanning the result across all 80m rows to work out the grouping.

I would add a new column of type DATE to table1 and store your converted timestamp data there permanently; with a proper index on that DATE column, the GROUP BY should then be able to work far more efficiently (a rough sketch of the change is below). I would also change the corresponding column in table2 to type DATE and avoid converting DATE to STRING at all - only format the date as a string if/when you actually need to read it in a different format from the other table.
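As a sketch only - the timestamp column name here is a guess, and the exact syntax varies by database engine (the DATE() call below is MySQL-style):

```sql
-- Assumes table1 has a timestamp column called created_ts; adjust names to your schema.
ALTER TABLE table1 ADD COLUMN col_date DATE;

-- One-time backfill of the new column from the existing timestamp.
UPDATE table1 SET col_date = DATE(created_ts);

-- Index the new column so GROUP BY / joins on it can use the index.
CREATE INDEX idx_table1_col_date ON table1 (col_date);
```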
If you play around with your SELECT statement, I suspect that removing col_date from the SELECT/GROUP BY will let the rest of the query run quite quickly, confirming the calculated column as the issue. If not, try adding/removing different columns from the SELECT, and experimenting with your indexes, to figure out which column(s) specifically slow the query down. Unfortunately this is very difficult for someone else to test for you without creating your exact table from the schema and then having 80m rows of sample data to test with.
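For example - this is only a guess at the shape of your query, since you haven't posted it, and the column names are made up:

```sql
-- Current shape: the date is computed per row at query time, then grouped on.
SELECT DATE(created_ts) AS col_date, some_col, COUNT(*) AS cnt
FROM table1
GROUP BY DATE(created_ts), some_col;

-- Test version: drop the calculated column and see how long the rest takes.
SELECT some_col, COUNT(*) AS cnt
FROM table1
GROUP BY some_col;
```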
MORE INFO: Although the question originally posted was about increasing the DB resources, in my opinion that's not really the correct solution here. If you've got a query that takes 15 minutes to run and can't complete because it's using all the DB resources, increasing those resources is just a band-aid. The table will keep getting bigger, more resources will be required again, and it's not a permanent fix.
Any query that runs for that long and uses that many resources needs to be optimised. I don't see anything in what you're trying to do that should take more than seconds to run, so long as the data and indexes are set up so the DB can do its job properly - databases are made to eat this kind of workload up, and I've seen far more complex queries run on much larger datasets without the problems you're having (once optimised).
ALSO, once you have a DATE column in place on table1, can we assume that records are never added with older timestamps? If so, you don't even have to execute your query on the entire table every time - you really only need to re-run it each day for the new data, since the stats on the historical data remain constant once calculated. Alternatively, you could break the query up to run multiple times for different date ranges, again reducing the resources in use - there are lots of ways to optimise this scenario.
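One hedged sketch of that incremental approach, assuming you keep the results in a summary table (the table and column names are made up, and the date literal is just a placeholder for "the range not yet summarised"):

```sql
-- Hypothetical summary table holding the per-day stats.
CREATE TABLE daily_stats (
    col_date  DATE PRIMARY KEY,
    row_count BIGINT
);

-- Each day (or per date range), aggregate only the rows not yet summarised,
-- rather than re-scanning the whole 80m-row table.
INSERT INTO daily_stats (col_date, row_count)
SELECT col_date, COUNT(*)
FROM table1
WHERE col_date >= '2024-01-01'  -- placeholder: first date not yet summarised
GROUP BY col_date;
```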