You can do this in O(n) time. Here are a couple of different ways:
Algorithm 1
Observe that the best subarray with A[i] as the smallest element always includes all the contiguous greater-or-equal elements on its left and right.
Make an array L and do a forward pass over A while maintaining a stack of every element seen so far that is less than all the elements seen after it. You can maintain this stack in linear time while going through A, and the top element always lets you fill in L[i] with the index of the first element in A[i]'s subarray. (Search for "monotonic stack" for a lot of information about this trick.)
Similarly, you can do a linear-time backward pass that lets you fill in R[i] with the index of the last element in A[i]'s subarray.
Then you can precompute the prefix sums of A, which let you get the sum of any subarray in constant time.
Finally, multiply every A[i] by the sum from A[L[i]] to A[R[i]] and remember the best result.
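Here's a rough sketch of what algorithm 1 could look like in python. The names solve1, L, R, and prefix are just for illustration, and like the algorithm 2 code further down it assumes the values in A are non-negative (so starting best at 0 is safe):

def solve1(A):
    n = len(A)
    # L[i] = index of the first element of A[i]'s subarray.
    # Forward pass: pop everything >= A[i]; what remains on top is the
    # nearest element to the left that is strictly smaller than A[i].
    L = [0] * n
    stack = []
    for i in range(n):
        while stack and A[stack[-1]] >= A[i]:
            stack.pop()
        L[i] = stack[-1] + 1 if stack else 0
        stack.append(i)
    # R[i] = index of the last element of A[i]'s subarray (symmetric backward pass).
    R = [0] * n
    stack = []
    for i in range(n - 1, -1, -1):
        while stack and A[stack[-1]] >= A[i]:
            stack.pop()
        R[i] = stack[-1] - 1 if stack else n - 1
        stack.append(i)
    # prefix[i] = sum of A[0..i-1], so the sum of A[l..r] is prefix[r+1] - prefix[l].
    prefix = [0] * (n + 1)
    for i in range(n):
        prefix[i + 1] = prefix[i] + A[i]
    best = 0
    for i in range(n):
        best = max(best, A[i] * (prefix[R[i] + 1] - prefix[L[i]]))
    return best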
Algorithm 2
The second algorithm also depends on the fact that the best subarray with A[i] as the smallest element always includes all the contiguous greater-or-equal elements on its left and right.
Because of this fact, if you visit the elements of A in decreasing order, then you can always use the current element A[i] as the smallest element, and join it to the contiguous already-visited subarrays on its immediate left and right. If we keep track of the sums of these contiguous subarrays, then we can do this traversal in linear time and remember the best result.
Of course, it seems that you have to sort A first, which we cannot do in linear time... but decreasing order isn't the only order that works. If you do a forward pass on A while maintaining a stack of every element seen so far that is less than all the elements seen after it (that monotonic stack again), then you can visit the elements of A in the order that they are removed from this stack, and that will still work: an element is only popped when a smaller-or-equal element arrives on its right, and by then everything between the two, all of it larger, has already been popped and visited.
Using that monotonic stack order avoids the original sort and lets us do the whole thing in linear time.
Here's a python implementation of algorithm 2 that returns the best product:
def solve(A):
    best = 0
    # indexes: stack of the indexes of elements that are smaller than everything seen after them
    indexes = []
    # lsums[k] = sum of the elements strictly between indexes[k-1] and indexes[k]
    # (for k = 0, the sum of the elements before indexes[0])
    lsums = []
    for i in range(len(A)):
        rsum = 0
        # Pop every element >= A[i]; each popped element is visited here.
        # When val is computed, rsum is the sum of the popped element's whole subarray.
        while len(indexes) > 0 and A[i] <= A[indexes[-1]]:
            small = A[indexes.pop()]
            rsum += lsums.pop() + small
            val = rsum * small
            best = max(val, best)
        indexes.append(i)
        lsums.append(rsum)
    # visit all the items left on the stack
    rsum = 0
    while len(indexes) > 0:
        small = A[indexes.pop()]
        rsum += lsums.pop() + small
        val = rsum * small
        best = max(val, best)
    return best
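For example (this test array is just an illustration, not from the question):

print(solve([3, 1, 6, 4, 5, 2]))  # prints 60: the subarray [6, 4, 5] has min 4 and sum 15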