I'll give a general idea of the approach.
First, why is binary search applicable here? Take some candidate length mid = l + (r - l) / 2 for the longest common subarray.
If the two arrays have a common subarray of length L, then they also have a common subarray of every length smaller than L (just take a prefix of it). Conversely, if they have no common subarray of length L, they cannot have one of any length greater than L. So the predicate "a common subarray of length mid exists" is monotone, and binary search applies: check whether a common subarray of length mid exists. If yes, an even longer one might exist, so record mid as the current answer and set l = mid + 1 to look for larger lengths. If no, there is no point in trying longer lengths, so set r = mid - 1.
Code in C++:
int l = 1, r = (int)min(array1.size(), array2.size()); // the answer can be at most the length of the shorter array
int answer = -1;                                        // -1 means no common subarray was found
while (l <= r)
{
    int mid = l + (r - l) / 2;          // candidate length
    if (check(array1, array2, mid))     // does a common subarray of length mid exist?
    {
        answer = mid;                   // mid works, try to find something longer
        l = mid + 1;
    }
    else
        r = mid - 1;                    // mid does not work, try shorter lengths
}
cout << answer << "\n";
Now the question is: given a length L and the two arrays, how do we check whether a common subarray of that length exists? For this you need hashing, which assigns a numerical value (a hash) to an array so that two arrays can be compared efficiently: equal arrays always get the same hash, and ideally different arrays get different hashes. As you might have guessed, two different arrays can still end up with the same hash; this is called a collision. Collisions cannot be eliminated completely, but a strong hash function makes them very unlikely. One such method is a rolling hash, which also lets you update the hash in O(1) when the window slides by one element; look up "rolling hash" for the general idea.
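For a concrete picture, here is a minimal sketch of a polynomial rolling hash over all windows of a fixed length L. The constants BASE and MOD are arbitrary choices of mine (any large prime modulus and a base larger than the element values will do), and the element values are assumed to be non-negative:

#include <vector>
#include <cstdint>
using namespace std;

const uint64_t MOD  = 1000000007;  // large prime modulus (arbitrary choice)
const uint64_t BASE = 1000003;     // base larger than the element values (arbitrary choice)

// Returns the hash of every subarray of length L of `a`:
// hash(i) = a[i]*BASE^(L-1) + a[i+1]*BASE^(L-2) + ... + a[i+L-1]  (mod MOD)
vector<uint64_t> windowHashes(const vector<int>& a, int L)
{
    vector<uint64_t> hashes;
    if ((int)a.size() < L) return hashes;

    uint64_t power = 1;                       // BASE^(L-1), used to drop the leftmost element
    for (int i = 0; i < L - 1; ++i) power = power * BASE % MOD;

    uint64_t h = 0;                           // hash of the first window
    for (int i = 0; i < L; ++i) h = (h * BASE + a[i]) % MOD;
    hashes.push_back(h);

    for (int i = L; i < (int)a.size(); ++i)   // slide the window, O(1) per step
    {
        h = (h + MOD - a[i - L] * power % MOD) % MOD;  // remove the leftmost element
        h = (h * BASE + a[i]) % MOD;                   // append the new element
        hashes.push_back(h);
    }
    return hashes;
}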
Now, for each check of mid inside the binary search: compute the rolling hash of every subarray of length mid in the first array and store these hashes in a data structure such as a hash table (average O(1) lookup) or a set (logarithmic lookup). Then compute the rolling hash of every subarray of length mid in the second array, and for each one, look it up in that data structure. If some hash is found, a common subarray of length mid exists, so return true to the binary search; if you go through every window of length mid in the second array without finding a stored hash, return false.
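One possible way to write check(), built on the windowHashes() sketch above. Keep in mind that equal hashes can, in rare cases, be a collision; a fully safe version would verify a match by comparing the actual subarrays or by using a second independent hash:

#include <unordered_set>

bool check(const vector<int>& a, const vector<int>& b, int L)
{
    unordered_set<uint64_t> seen;              // hashes of all length-L windows of a
    for (uint64_t h : windowHashes(a, L)) seen.insert(h);
    for (uint64_t h : windowHashes(b, L))      // probe with every length-L window of b
        if (seen.count(h)) return true;        // same hash -> (almost surely) a common subarray
    return false;
}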
So, assuming you use a hash table, the total time complexity is
O( (array1.size() + array2.size()) * log( min(array1.size(), array2.size()) ) )
because the binary search runs about log(min(array1.size(), array2.size())) iterations, and in each iteration you traverse both arrays once, computing rolling hashes and doing average O(1) hash-table lookups, which is O(array1.size() + array2.size()) work. For example, with two arrays of 10^5 elements each, that is about 17 binary-search iterations of roughly 2 * 10^5 operations each.
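Putting it all together, a small driver (the array contents here are made up purely for illustration, and it assumes the windowHashes()/check() sketches above):

#include <iostream>
#include <algorithm>

int main()
{
    vector<int> array1 = {1, 2, 3, 2, 1};
    vector<int> array2 = {3, 2, 1, 4, 7};

    int l = 1, r = (int)min(array1.size(), array2.size());
    int answer = -1;                            // same convention as above: -1 means no common subarray
    while (l <= r)
    {
        int mid = l + (r - l) / 2;
        if (check(array1, array2, mid)) { answer = mid; l = mid + 1; }
        else r = mid - 1;
    }
    cout << answer << "\n";                     // prints 3 here: the common subarray is {3, 2, 1}
    return 0;
}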