The trick is that you can deduce the products for the next X from the ones of the current X without multiplying everything again. Using that, you just need two passes on the array - once to get the initial product of the second half (the entire array) and once again to update the products for each element and find the minimal difference:
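For reference, a minimal sketch of that two-pass idea in Python, assuming the challenge asks for the smallest absolute difference between the products of the two halves over all split points, and that the elements are nonzero integers (the function name and the non-empty-halves restriction are my assumptions, not part of the original challenge):

```python
def min_product_difference(arr):
    # Pass 1: product of the entire array -- the initial "second half".
    right = 1
    for value in arr:
        right *= value

    left = 1
    best = None
    # Pass 2: move the split point one element at a time.  Iterating
    # over arr[:-1] keeps both halves non-empty; adjust if the challenge
    # allows an empty half.
    for value in arr[:-1]:
        left *= value    # the element joins the first half's product...
        right //= value  # ...and leaves the second half's (exact for
                         # nonzero integers, since right is a multiple)
        diff = abs(left - right)
        if best is None or diff < best:
            best = diff
    return best
```

For example, `min_product_difference([2, 3, 4, 6])` returns 18, from the split [2, 3] | [4, 6] with |6 - 24| = 18.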
How is this different from my second approach?
Now that I look at it - it isn't. I was probably not paying enough attention when I looked at your original post, and thought that you were calculating the entire products for each index...
The time complexity should be linear, and since every element can affect the result, you can't have sublinear complexity. So how did you exceed the time limit? Are they timing the actual run instead of doing complexity analysis?
Yes, I think the actual run time is what counts for the timeout.
I'm unsure what went wrong. That's why I wanted to discuss this challenge. :)