A divide-and-conquer solution splits the array into two halves, say A1 and A2. Then, having recursively solved the problem for each half, you consider the scenarios in which the optimal solution to the original array may lie.
Option 1: The longest contiguous increasing subsequence is completely in A1, in which case you already found the maximum length, or the relevant answer, or whatever it is you are planning to return.
Option 2: Similarly, the longest contiguous increasing subsequence is entirely in A2.
Option 3: The longest contiguous increasing subsequence is partially in A1 and partially in A2. In this case, with A1 being the left half and A2 the right half, the candidate run crosses the split point, which is only possible if the last element of A1 is smaller than the first element of A2. You extend that run to the left through A1 as long as the values keep increasing or you reach A1's left end, and to the right through A2 as long as the values keep increasing or you reach A2's right end.
Among these options you take the one with the greatest length, and you're done. A rough sketch of this approach is given below.
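Here is a minimal sketch of that recursion in Python (my own illustration, assuming strictly increasing runs and returning only the length):

def longestIncreasingDC(arr, lo=0, hi=None):
    # Length of the longest strictly increasing contiguous run in arr[lo:hi].
    if hi is None:
        hi = len(arr)
    if hi - lo <= 1:
        return hi - lo
    mid = (lo + hi) // 2
    # Options 1 and 2: the best run lies entirely inside one half.
    best = max(longestIncreasingDC(arr, lo, mid), longestIncreasingDC(arr, mid, hi))
    # Option 3: a run crossing the split point, possible only if it keeps increasing there.
    if arr[mid - 1] < arr[mid]:
        left = mid - 1
        while left > lo and arr[left - 1] < arr[left]:
            left -= 1
        right = mid
        while right + 1 < hi and arr[right] < arr[right + 1]:
            right += 1
        best = max(best, right - left + 1)
    return best

Each merge step may scan the whole subarray in the worst case, which is where the O(n log n) bound mentioned below comes from.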
However, I should note that divide-and-conquer is not the optimal solution to this problem, as it has O(n log n) time complexity. As mentioned in Jon Bentley's notable book, Programming Pearls, what he calls the maximum sum contiguous subsequence problem has a known linear-time solution, and that solution can easily be adapted to track increasing runs instead of the maximum sum.
The algorithm uses an approach Bentley calls scanning, built on the idea that any contiguous subsequence has to end somewhere, so it is enough to track, for each position, the best subsequence ending there and update the overall best as you go.
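For reference, the max-sum version of that scan looks roughly like this (a minimal sketch with my own naming, not Bentley's code):

def maxSubarraySum(arr):
    # Scanning idea: track the best sum of a run ending at the current element,
    # and the best sum seen anywhere so far. The empty subsequence counts as 0.
    best = 0
    ending_here = 0
    for x in arr:
        # Either extend the run ending at the previous element, or start over empty.
        ending_here = max(ending_here + x, 0)
        best = max(best, ending_here)
    return best

The adaptation below keeps the same shape, but instead of a running sum it keeps the length of the increasing run ending at the current element.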
The approach is painfully simple, and a Python implementation can be found below.
def maxIncreasing(arr):
    # Returns (length, start index) of the longest strictly increasing contiguous run.
    if not arr:
        return (0, 0)
    # Best run seen so far.
    maxLength = 1
    maxStart = 0
    # Run ending at the current element.
    curStart = 0
    curLength = 1
    for i in range(1, len(arr)):
        if arr[i] <= arr[i-1]:
            # The current run is broken: record it if it is the best so far, then restart.
            if curLength > maxLength:
                maxLength = curLength
                maxStart = curStart
            curStart = i
            curLength = 1
        else:
            # Still increasing: extend the current run.
            curLength += 1
    # The final run is never followed by a break, so check it separately.
    if curLength > maxLength:
        maxLength = curLength
        maxStart = curStart
    return (maxLength, maxStart)
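For example, maxIncreasing([5, 1, 2, 3, 0, 4, 5]) returns (3, 1): the run 1, 2, 3 has length 3 and starts at index 1 (ties are resolved in favor of the earlier run).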