
This query gets the output I want. In order to run it I have to run

SET sql_mode = '';

Because otherwise I get an error:

SELECT list is not in GROUP BY clause and contains nonaggregated column 'knownLoss.t1.loss' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by

SELECT 
    t1.klDate AS LDate
  , t1.Category
  , t1.reason AS Reason
  , sum(t1.loss) AS Loss
  , round(t1.loss / t2.loss,2) AS Percentage
FROM 
  deptkl t1
JOIN (
  SELECT 
      Store
    , sum(loss) AS loss
  FROM
    deptkl
  WHERE 
        klDate >= date_sub(SUBDATE(curdate(), WEEKDAY(curdate())), interval 7 day)
    AND klDate < SUBDATE(curdate(), WEEKDAY(curdate()))
    AND Store = 19
    AND Department = 80
) t2 ON t1.Store = t2.Store
WHERE 
      klDate >= date_sub(SUBDATE(curdate(), WEEKDAY(curdate())), interval 7 day)
  AND klDate < SUBDATE(curdate(), WEEKDAY(curdate()))
  AND t1.Store = 19
  AND Department = 80
GROUP BY
    klDate
  , Category
  , reason

When I place this into the Dataset and Query Dialog of Jasper Studio, I get the same error, and I am unable to use the SET sql_mode = ''; command there. Any thoughts? Is there a way to achieve this without using SET sql_mode = '';?
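For what it's worth, an alternative to clearing sql_mode entirely would be to strip just the offending mode for the current session, but that still relies on being able to issue a SET statement, which I can't do from the Jasper query dialog:

-- drops only ONLY_FULL_GROUP_BY for this session, keeping the other modes
SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));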


2 Answers


I'm guessing the error is from this line in your select:

round(t1.loss / t2.loss,2) AS Percentage

Since your GROUP BY clause does not include this column, it's somewhat of a coin toss which t1.loss and t2.loss values will be used. In some cases those values happen to always be the same based on your criteria, so you get the correct results regardless, but the db will still complain since it's being asked to return somewhat arbitrary results for those columns. One way to deal with this would be to simply apply an aggregate function to the columns in question, like this:

round(min(t1.loss) / min(t2.loss),2) AS Percentage

or...

round(avg(t1.loss) / avg(t2.loss),2) AS Percentage
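
Applied to the full query from the question, that would look something like this (just a sketch: min(t2.loss) is harmless because the t2 subquery returns a single row per store, while whether min(t1.loss) gives the value you actually want depends on your data, as the comments below note):

SELECT 
    t1.klDate AS LDate
  , t1.Category
  , t1.reason AS Reason
  , sum(t1.loss) AS Loss
  , round(min(t1.loss) / min(t2.loss), 2) AS Percentage
FROM 
  deptkl t1
-- ... rest of the JOIN and WHERE clauses unchanged ...
GROUP BY
    klDate
  , Category
  , reason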
    While this correctly explains the source of the error, the proposed solution may not be logically correct. – Tim Biegeleisen Apr 02 '17 at 06:44
  • @TimBiegeleisen - I agree, but it's impossible to know without seeing the dataset. I myself have run into this where the result of a given column is *always* the same in a `group by`, and in that case `min()` or `max()` will at least get rid of the error and still output the same results. A better solution would probably be to adjust the `group by` clause - but only the author can determine this. – But those new buttons though.. Apr 02 '17 at 15:30
  • The OP shouldn't leave all of us guessing like this, but your logic seems as good as any guess right now. – Tim Biegeleisen Apr 02 '17 at 15:32

I think you want to do this:

round(sum(t1.loss / t2.loss) / count(*), 2) AS Percentage

This will calculate the sum of the loss ratio for every record in the result and then divide it by the count of records in the group; it's like an average of averages.
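
As a quick illustration with made-up numbers (not the question's data): for a group of two rows where t1.loss is 10 and 30 and t2.loss is 100, the expression gives (10/100 + 30/100) / 2 = 0.20, i.e. the average of the per-row ratios:

SELECT round(sum(loss / total) / count(*), 2) AS Percentage
FROM (
  SELECT 10 AS loss, 100 AS total
  UNION ALL
  SELECT 30 AS loss, 100 AS total
) x;
-- returns 0.20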

EDIT:

Sorry, I made a syntax error; it should now give the wanted result. The error occurs because you are not using an aggregate function on a column that is not in the GROUP BY clause.
