
I have a dataset like the one below, which contains about 1,000 passenger IDs and their travel frequency in each of twelve time periods (T1 to T12) on each day from Sunday to Saturday. Is it possible to cluster this dataset using biclustering, and if so, how? (A sketch of what I have in mind follows the sample data.)

ID  T1  T2  T3  T4  T5  T6  T7  T8  T9  T10 T11 T12 Day
1005 0  5   15  1   0   1   20  2   1   1   0   0   Sunday
1005 0  2   1   0   4   1   21  1   0   0   0   0   Monday
1005 0  0   12  0   1   4   1   2   0   1   1   1   Tuesday
1005 0  0   5   1   0   0   6   0   0   2   0   1   Wednesday
1005 0  0   0   2   2   2   2   1   0   2   0   0   Thursday
1005 0  0   0   0   1   1   0   1   0   0   1   0   Friday
1005 0  0   0   0   1   0   0   0   0   1   0   0   Saturday
1006 2  0   0   2   0   0   0   0   1   0   0   0   Sunday
1006 2  0   0   0   0   0   1   1   1   2   0   0   Monday
1006 0  5   0   0   1   2   0   3   1   4   0   0   Tuesday
1006 0  5   0   0   1   0   1   2   2   0   1   1   Wednesday
1006 0  0   2   2   0   0   2   3   3   2   0   0   Thursday
1006 1  0   0   0   2   0   0   3   2   2   1   0   Friday
1006 0  0   0   0   0   0   0   0   0   0   0   0   Saturday
1010 0  0   1   3   4   2   1   4   7   3   0   0   Sunday
1010 2  1   1   1   1   2   3   1   3   4   2   2   Monday
1010 0  3   3   3   5   4   5   2   2   4   6   1   Tuesday
1010 2  1   2   0   3   1   2   1   2   3   6   1   Wednesday
1010 5  1   2   2   2   1   3   1   0   1   3   0   Thursday
1010 2  2   1   2   3   0   3   0   2   2   2   4   Friday
1010 0  1   2   1   1   3   4   3   0   3   2   2   Saturday
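
If biclustering applies here, one direction I am considering is scikit-learn's SpectralCoclustering on a passengers-by-(time slot, day) matrix. A minimal sketch, assuming the long table above is in a CSV file; the file path, the pivot step, and n_clusters=5 are all placeholder choices:

    import pandas as pd
    from sklearn.cluster import SpectralCoclustering

    # Load the long-format table shown above; "travel.csv" is a placeholder path.
    df = pd.read_csv("travel.csv")

    # Reshape to one row per passenger: 12 time slots x 7 days = 84 columns.
    wide = df.pivot(index="ID", columns="Day",
                    values=[f"T{i}" for i in range(1, 13)]).fillna(0)
    X = wide.to_numpy()

    # Spectral co-clustering groups rows (passengers) and columns (time slot /
    # day combinations) at the same time. All-zero rows or columns should be
    # dropped first, because the method normalizes by row and column sums.
    model = SpectralCoclustering(n_clusters=5, random_state=0)
    model.fit(X)

    print("Passenger cluster labels:", model.row_labels_)
    print("Time-slot cluster labels:", model.column_labels_)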

I have also converted the dataset into the wide format below (using melt and a for loop). Var1 means "T1 Sunday", Var2 means "T2 Sunday", and so on, up to Var84, which means "T12 Saturday".

 ID Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 Var9 Var10 Var11 Var12 Var13 Var14 Var15 Var16 Var17 Var18 Var19 Var20 Var21 Var22 Var23 Var24 Var25 Var26 Var27
1 1005    0    5   15    1    0    1   20    2    1     1     0     0     0     2     1     0     4     1    21     1     0     0     0     0     0     0    12
2 1006    2    0    0    2    0    0    0    0    1     0     0     0     2     0     0     0     0     0     1     1     1     2     0     0     0     5     0
3 1010    0    0    1    3    4    2    1    4    7     3     0     0     2     1     1     1     1     2     3     1     3     4     2     2     0     3     3
  Var28 Var29 Var30 Var31 Var32 Var33 Var34 Var35 Var36 Var37 Var38 Var39 Var40 Var41 Var42 Var43 Var44 Var45 Var46 Var47 Var48 Var49 Var50 Var51 Var52 Var53 Var54
1     0     1     4     1     2     0     1     1     1     0     0     5     1     0     0     6     0     0     2     0     1     0     0     0     2     2     2
2     0     1     2     0     3     1     4     0     0     0     5     0     0     1     0     1     2     2     0     1     1     0     0     2     2     0     0
3     3     5     4     5     2     2     4     6     1     2     1     2     0     3     1     2     1     2     3     6     1     5     1     2     2     2     1
  Var55 Var56 Var57 Var58 Var59 Var60 Var61 Var62 Var63 Var64 Var65 Var66 Var67 Var68 Var69 Var70 Var71 Var72 Var73 Var74 Var75 Var76 Var77 Var78 Var79 Var80 Var81
1     2     1     0     2     0     0     0     0     0     0     1     1     0     1     0     0     1     0     0     0     0     0     1     0     0     0     0
2     2     3     3     2     0     0     1     0     0     0     2     0     0     3     2     2     1     0     0     0     0     0     0     0     0     0     0
3     3     1     0     1     3     0     2     2     1     2     3     0     3     0     2     2     2     4     0     1     2     1     1     3     4     3     0
  Var82 Var83 Var84
1     1     0     0
2     0     0     0
3     3     2     2

Using this representation, can I cluster the passengers with k-means? Actually, I am not sure which clustering technique is most suitable for this kind of dataset.
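
A minimal k-means sketch on that 84-column table, assuming it is in a data frame named wide_df shaped like the output above; standardizing the features and k=4 are assumptions to tune:

    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # wide_df: one row per passenger, with columns ID and Var1..Var84.
    X = wide_df.drop(columns="ID").to_numpy()

    # K-means is scale-sensitive, so standardizing the columns is one option.
    X_scaled = StandardScaler().fit_transform(X)

    # k=4 is arbitrary; the elbow method or silhouette score can guide the choice.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
    wide_df["cluster"] = kmeans.fit_predict(X_scaled)

    print(wide_df[["ID", "cluster"]])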

ID  Hot temporal topics
1005    var2 var2 var2 var2 var2 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var3 var4 var6 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var7 var8 var8 var9 var10 var14 var14 var15 var17 var17 var17 var17 var18 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var19 var20 var27 var27 var27 var27 var27 var27 var27 var27 var27 var27 var27 var27 var29 var30 var30 var30 var30 var31 var32 var32 var34 var35 var36 var39 var39 var39 var39 var39 var40 var43 var43 var43 var43 var43 var43 var46 var46 var48 var52 var52 var53 var53 var54 var54 var55 var55 var56 var58 var58 var65 var66 var68 var71 var77 var82
1006    var1 var1 var4 var4 var9 var13 var13 var19 var20 var21 var22 var22 var26 var26 var26 var26 var26 var29 var30 var30 var32 var32 var32 var33 var34 var34 var34 var34 var38 var38 var38 var38 var38 var41 var43 var44 var44 var45 var45 var47 var48 var51 var51 var52 var52 var55 var55 var56 var56 var56 var57 var57 var57 var58 var58 var61 var65 var65 var68 var68 var68 var69 var69 var70 var70 var71
1010    var3 var4 var4 var4 var5 var5 var5 var5 var6 var6 var7 var8 var8 var8 var8 var9 var9 var9 var9 var9 var9 var9 var10 var10 var10 var13 var13 var14 var15 var16 var17 var18 var18 var19 var19 var19 var20 var21 var21 var21 var22 var22 var22 var22 var23 var23 var24 var24 var26 var26 var26 var27 var27 var27 var28 var28 var28 var29 var29 var29 var29 var29 var30 var30 var30 var30 var31 var31 var31 var31 var31 var32 var32 var33 var33 var34 var34 var34 var34 var35 var35 var35 var35 var35 var35 var36 var37 var37 var38 var39 var39 var41 var41 var41 var42 var43 var43 var44 var45 var45 var46 var46 var46 var47 var47 var47 var47 var47 var47 var48 var49 var49 var49 var49 var49 var50 var51 var51 var52 var52 var53 var53 var54 var55 var55 var55 var56 var58 var59 var59 var59 var61 var61 var62 var62 var63 var64 var64 var65 var65 var65 var67 var67 var67 var69 var69 var70 var70 var71 var71 var72 var72 var72 var72 var74 var75 var75 var76 var77 var78 var78 var78 var79 var79 var79 var79 var80 var80 var80 var82 var82 var82 var83 var83 var84 var84

Moreover, I have also tried converting the frequencies into words, as shown above (e.g., when Var8 has a frequency of 20, I write "var8" 20 times). Is LDA suitable for clustering this dataset?
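
As far as I can tell, LDA implementations accept the document-term count matrix directly, so repeating each word may not be necessary: in the wide table, each passenger is a "document", each Var a "word", and each cell a count. A minimal sketch of what I have in mind with scikit-learn's LatentDirichletAllocation (n_components=5 is an arbitrary choice, and wide_df is the 84-column data frame from above):

    from sklearn.decomposition import LatentDirichletAllocation

    # The Var columns already form a document-term count matrix.
    X_counts = wide_df.drop(columns="ID").to_numpy()

    lda = LatentDirichletAllocation(n_components=5, random_state=0)
    doc_topic = lda.fit_transform(X_counts)  # passengers x topics

    # Treat each passenger's dominant topic as a soft cluster assignment.
    wide_df["topic"] = doc_topic.argmax(axis=1)
    print(wide_df[["ID", "topic"]])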

— Meixu Chen

1 Answer


Never treat clustering algorithms as black boxes.

Otherwise, the results will most likely solve a different problem than the one you actually have.

Every clustering algorithm tries to discover a particular kind of structure. K-means, for example, tries to find the Voronoi partition of the data with the smallest sum of squared deviations. It only makes sense to use k-means if least squares is the problem you want to solve.
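
Concretely, the quantity k-means minimizes can be written down and checked; a minimal sketch on toy data (the shapes and k are arbitrary):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.random((100, 84))  # toy stand-in for the passenger matrix

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # The k-means objective: the sum of squared distances from each point to
    # its assigned cluster centroid (scikit-learn calls this inertia).
    ssq = ((X - kmeans.cluster_centers_[kmeans.labels_]) ** 2).sum()
    print(ssq, kmeans.inertia_)  # agree up to floating-point error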

Therefore, you should first be specific about the patterns you are looking for (which depend on your data and your problem), and then identify a clustering algorithm that can find such patterns.

So, what patterns are you looking for, and how can you compute the quality of a pattern?
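
One off-the-shelf answer, if a "good" pattern means compact and well-separated groups under Euclidean distance, is the silhouette score; a minimal sketch on toy data (the distance metric is the key assumption):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = rng.random((100, 84))  # toy stand-in for the passenger matrix

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Silhouette ranges from -1 to 1; higher means each point sits closer to
    # its own cluster than to the next nearest one. It is only meaningful if
    # Euclidean distance matches your notion of similar travel behaviour.
    print(silhouette_score(X, labels, metric="euclidean"))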

— Has QUIT--Anony-Mousse