I've got a massive dataset (~30 billion rows):
host_id | usr_id | src_id | vis_num | event_ts
Any user belonging to a parent host can visit a page from a source (src_id), where the source is, say, their phone, tablet, or computer (the device type itself is not identifiable). The column vis_num is the ordered visit number per source per user per host, and event_ts captures the timestamp of each visit per source per user per host. An example data set for one host might look like this:
host_id | usr_id | src_id | vis_num | event_ts
----------------------------------------------------------------
100 | 10 | 05 | 1 | 2017-08-01 14:52:34
100 | 10 | 05 | 1 | 2017-08-01 14:56:00
100 | 10 | 05 | 1 | 2017-08-01 14:58:09
100 | 10 | 05 | 2 | 2017-08-01 17:08:10
100 | 10 | 05 | 2 | 2017-08-01 17:16:07
100 | 10 | 05 | 2 | 2017-08-01 17:23:25
100 | 10 | 72 | 1 | 2017-07-29 20:03:01
100 | 10 | 72 | 1 | 2017-07-29 20:04:10
100 | 10 | 72 | 2 | 2017-07-29 20:45:17
100 | 10 | 72 | 2 | 2017-07-29 20:56:46
100 | 10 | 72 | 3 | 2017-07-30 09:30:15
100 | 10 | 72 | 3 | 2017-07-30 09:34:19
100 | 10 | 72 | 4 | 2017-08-01 18:16:57
100 | 10 | 72 | 4 | 2017-08-01 18:26:00
100 | 10 | 72 | 5 | 2017-08-02 07:53:33
100 | 22 | 43 | 1 | 2017-07-06 11:45:48
100 | 22 | 43 | 1 | 2017-07-06 11:46:12
100 | 22 | 43 | 2 | 2017-07-07 08:41:11
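For reference, the table is roughly shaped like this (the column types are my guess and Postgres-style syntax is assumed; the real DDL may differ):

create table tableA (
    host_id   integer,    -- parent host
    usr_id    integer,    -- user within the host
    src_id    integer,    -- source/device: phone, tablet, computer, etc. (not identifiable as such)
    vis_num   integer,    -- ordered visit number per (host_id, usr_id, src_id)
    event_ts  timestamp   -- timestamp of each visit
);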
For each source id, a change in visit number implies a log-off and a subsequent log-on. Note that activity from different sources may overlap in time.
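Put differently, within a (host_id, usr_id, src_id) group the first event of each vis_num is a log-on and its last event is (approximately) the log-off. A minimal sketch of that interpretation, assuming Postgres-style SQL:

select host_id,
       usr_id,
       src_id,
       vis_num,
       min(event_ts) as logon_ts,   -- first event of the visit = log-on
       max(event_ts) as logoff_ts   -- last event of the visit = approximate log-off
from tableA
group by host_id, usr_id, src_id, vis_num;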
My goal is to calculate how many (non-new) users logged in at least twice within some time interval, say 45 days. The end goal is:
1) Identify all users who repeated the critical event at least twice within a certain time period (45 days).
2) For those users, measure the length of time they took between completing the event the first and second time.
3) Plot a cumulative distribution function, i.e., the percentage of users who performed the second event within different time intervals.
4) Identify the time interval at which 80% of users have completed the second event; this is the product usage interval. (A rough sketch of these steps follows the reference below.)
Page 23 of:
http://usdatavault.com/library/Product-Analytics-Playbook-vol1-Mastering_Retention.pdf
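To make that concrete, here is how I picture steps 1, 2 and 4 (Postgres-style SQL; the CTE names are mine, percentile_cont/filter availability is assumed, and the "non-new users" filtering from my attempts below is deliberately left out):

with logons as (            -- one row per log-on: the first event of each vis_num
    select host_id, usr_id, src_id, vis_num,
           min(event_ts) as logon_ts
    from tableA
    group by 1, 2, 3, 4
),
ranked as (                 -- order each user's log-ons across all sources
    select host_id, usr_id, logon_ts,
           row_number() over (partition by host_id, usr_id
                              order by logon_ts) as logon_rank
    from logons
),
first_two as (              -- first and second log-on per user (step 2 is their difference)
    select host_id, usr_id,
           min(case when logon_rank = 1 then logon_ts end) as first_logon,
           min(case when logon_rank = 2 then logon_ts end) as second_logon
    from ranked
    where logon_rank <= 2
    group by 1, 2
)
select count(*) filter (where second_logon - first_logon <= interval '45 days')
           as users_with_second_logon_within_45d,                        -- step 1
       percentile_cont(0.8) within group (order by second_logon - first_logon)
           as p80_time_to_second_logon                                   -- step 4
from first_two
where second_logon is not null;

For step 3, cume_dist() over (order by second_logon - first_logon) computed over the same first_two rows should give the empirical CDF to plot.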
Here is what I've tried:
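-- intent: find each user's first-ever login, then, per (host, user, source),
-- measure the gap between consecutive events whenever vis_num changes,
-- and finally count non-new users with such a gap under 45 days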
with new_users as (
select host_id || ' ' || usr_id as host_usr_id,
min(event_ts) as first_login_date
from tableA
group by 1
),
time_diffs as (
select a.host_id || ' ' || a.usr_id as host_usr_id,
a.usr_id,
a.src_id,
a.event_ts,
a.vis_num,
b.first_login_date,
       case when lag(a.vis_num) over (partition by a.host_id, a.usr_id, a.src_id
                                      order by a.event_ts) <> a.vis_num
            then a.event_ts - lag(a.event_ts) over (partition by a.host_id, a.usr_id, a.src_id
                                                    order by a.event_ts)
            else null
       end as time_diff
from tableA a
left join new_users b
on b.host_usr_id = a.host_id || ' ' || a.usr_id
where a.event_ts > current_date - interval '45 days'
  and a.event_ts > b.first_login_date + interval '45 days'
)
select count(distinct case when time_diff < interval '45 days'
                             and event_ts > first_login_date + interval '45 days'
                            then host_usr_id end) as cnt_45
from time_diffs
I've tried multiple other (very different) queries, one of which is below, but performance is definitely an issue here. Joining on date intervals is also a new concept to me. Any help is appreciated.
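By "joining on date intervals" I mean something like the pattern below, which pairs each event with any later event from the same user inside a 45-day window. This is only a generic illustration (not part of my actual attempts), and on ~30 billion rows a self-join like this is exactly where I'd expect performance to hurt:

select a.host_id,
       a.usr_id,
       a.event_ts as first_ts,
       b.event_ts as later_ts
from tableA a
join tableA b
  on  b.host_id  = a.host_id
  and b.usr_id   = a.usr_id
  and b.event_ts >  a.event_ts
  and b.event_ts <= a.event_ts + interval '45 days';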
Another approach:
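-- intent: approximate the number of logins per (host, user, source) in the recent
-- 45-day window as max(vis_num) - min(vis_num) + 1, then count users with more than one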
with new_users as (
select host_id,
usr_id,
min(event_ts) as first_login_date
from tableA
group by 1,2
),
x_day_twice as (
select a.host_id,
a.usr_id,
a.src_id,
max(a.vis_num) - min(a.vis_num) + 1 as num_logins
from tableA a
left join new_users b
on a.host_id = b.host_id and a.usr_id = b.usr_id
and a.event_ts > b.first_login_date + interval '45 days'
where a.event_ts >= current_timestamp - interval '1 day' - interval '45 days'
  and b.first_login_date < current_date - 1 - 45
group by 1, 2, 3
)
select count(distinct case when num_logins > 1
then host_id || ' ' || usr_id end)
from x_day_twice