
Say I have a .csv file which holds data describing sales of a product. Now say I want a monthly breakdown of the number of sales. I mean I want to see how many orders were received in JAN2005, FEB2005...JAN2008, FEB2008...NOV2012, DEC2012.

Now one very simple way I can think of is to count them one month at a time like this. (BTW I am using LogParser to run my queries.)

logparser -i:csv -o:csv "SELECT COUNT(*) AS NumberOfSales INTO 'C:\Users\blah.csv' FROM 'C:\User\whatever.csv' WHERE OrderReceiveddate LIKE '%JAN2005%'"

My question is whether there is a smarter way to do this. I mean, instead of changing the month again and again and re-running my query, can I write one query which produces the whole result in one Excel file all at once?

Sahil Chaudhary

2 Answers


Yes.

If you add a GROUP BY clause to the statement, the SQL will return a separate count for each unique value of the grouped column.

So if you write:

SELECT OrderReceiveddate, COUNT(*) AS NumberOfSales INTO 'C:\Users\blah.csv'
FROM 'C:\User\whatever.csv' GROUP BY OrderReceiveddate

you will get results like:

JAN2005 12
FEB2005 19
MAR2005 21
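The effect of GROUP BY here can be sketched outside LogParser too. This is a hypothetical illustration using Python's built-in sqlite3 (table name, column name, and the sample counts are assumed, not taken from the real data):

```python
# Sketch: GROUP BY returns one count per distinct OrderReceiveddate value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (OrderReceiveddate TEXT)")
# Fake rows: 12 January, 19 February, 21 March orders.
conn.executemany(
    "INSERT INTO sales VALUES (?)",
    [("JAN2005",)] * 12 + [("FEB2005",)] * 19 + [("MAR2005",)] * 21,
)

rows = conn.execute(
    "SELECT OrderReceiveddate, COUNT(*) AS NumberOfSales "
    "FROM sales GROUP BY OrderReceiveddate ORDER BY NumberOfSales"
).fetchall()
for month, count in rows:
    print(month, count)
```

One query, one count per month, instead of one query per month.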

Vulcronos

Assuming OrderReceiveddate is a date, you would format the date down to year and month and then aggregate:

SELECT date_format(OrderReceiveddate, '%Y-%m') as YYYYMM, COUNT(*) AS NumberOfSales
INTO 'C:\Users\blah.csv'
FROM 'C:\User\whatever.csv' 
WHERE OrderReceiveddate >= '2005-01-01'
GROUP BY date_format(OrderReceiveddate, '%Y-%m')
ORDER BY YYYYMM

You don't want to use LIKE on a date column; LIKE expects string arguments. Use date functions instead.
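The same bucketing idea, sketched in plain Python (the sample dates are made up): format each real date value to a year-month key, then count per key, mirroring the `date_format(..., '%Y-%m')` grouping above.

```python
# Sketch: bucket date values into 'YYYY-MM' keys and count per bucket.
from collections import Counter
from datetime import date

orders = [date(2005, 1, 3), date(2005, 1, 20), date(2005, 2, 7)]  # fake data
sales_per_month = Counter(d.strftime("%Y-%m") for d in orders)
for month in sorted(sales_per_month):
    print(month, sales_per_month[month])
```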

Gordon Linoff