Ctrl + F on Chrome opens a search box used to find text on a web page, PDF, etc. It's one of the fastest I have seen, so I decided to...
Great write-up on the different string matching algorithms!
The source code for find-in-page is actually open source! You can see how it's implemented here: source.chromium.org/chromium/chrom...
Actual implementation: https://source.chromium.org/chromium/chromium/src/+/master:v8/src/strings/string-search.h;l=281;drc=e3355a4a33909a48ebb8614048d90cffc67d287e?q=string%20search&ss=chromium&originalUrl=https:%2F%2Fcs.chromium.org%2F
Looks like they swap in different algorithms based on the context.
OMG! Thanks for sharing :). I read on StackOverflow that somewhere around 2008-2010, when Chrome was picking up, they shared how they'd implemented a version of the Boyer-Moore pattern matching algorithm. But I couldn't find it on YouTube.
Very cool!
OMG!! Thanks for reading!
That code does not work for a string consisting of only the same character (e.g. pattern = 'TT').
So, after replacing 'skip = Math.max(1, j - map[string[i+j]]);' with 'skip = Math.max(1, j - map[string[i+j].charCodeAt(0)]);', it works correctly.
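For anyone following along, here is a minimal sketch of a bad-character-only Boyer-Moore search with that char-code fix folded in (the function and variable names are my own, not the article's):

```javascript
// Boyer-Moore search using only the bad-character heuristic.
// The last-occurrence table is keyed by char code, so patterns
// made of a single repeated character (like 'TT') work correctly.
function boyerMooreSearch(text, pattern) {
  const m = pattern.length;
  const n = text.length;
  if (m === 0) return 0;

  // Step 1: record the LAST index of each character in the pattern.
  const last = new Map();
  for (let j = 0; j < m; j++) {
    last.set(pattern.charCodeAt(j), j);
  }

  let i = 0; // current alignment of the pattern against the text
  while (i <= n - m) {
    let j = m - 1;
    // Compare from the end of the pattern backwards.
    while (j >= 0 && pattern[j] === text[i + j]) j--;
    if (j < 0) return i; // full match found at index i

    // Bad-character rule: shift by at least 1.
    const seen = last.has(text.charCodeAt(i + j))
      ? last.get(text.charCodeAt(i + j))
      : -1;
    i += Math.max(1, j - seen);
  }
  return -1; // no match
}
```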
Thanks for pointing out :) Code updated!
Well, the combined code is still the same as before.
very interesting!
Thanks for reading :)
Good read
Thanks for reading :)
Hey Akhil! Amazing article! I had always wondered how Chrome is able to find matching strings so fast.
One question: is this type of pattern matching slower or faster than using regular expressions (say, in Python)?
It depends on the complexity of the regex you're searching with. Regex engines usually build some sort of state machine from your regex, sort of like a compiler, then use that to check against the string. A state-machine-based implementation will generally be slower because of all the extra bookkeeping it has to do, but with simple enough regexes the compiled state machine might behave fairly similarly to this.
Regexes also aren't guaranteed to be much faster than a hand-written implementation for things like finding phone numbers. They're pretty darn fast, don't get me wrong, but there might be optimizations that could be made for specific use cases. Regex is just a convenient abstraction. Much like JavaScript or Python, it's "just fast enough" for the vast majority of use cases, but for some like these, you need a better implementation for it to be performant.
I am not sure about its speed in Python (maybe StackOverflow can help with that), but overall, regexes are faster for dynamic situations like finding all phone numbers, while specialized algorithms might be faster for finding a particular phone number in a record of a million phone numbers.
Regexes are generally functions, with arbitrary rules as to their composition. For example, "all 10 or 11 digit phone numbers starting with +91" might be expressed in regex as
(?:\+91|\(\+91\))[ -]?\d{3}[ -]?\d{3}[ -]?\d{3}
but a compiled function might use many other tricks to find its way through the document, be it a trie (moderately memory intensive) or whatever. Regexes are just a simplified means of expressing functions, with their own grammatical structure.
Simple byte lookups (especially in an ASCII or ASCII-compatible document) are dozens, if not hundreds or thousands, of times faster than a composed function like that.
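To make that concrete, here is a quick sketch (the sample text is made up) comparing a plain substring scan with the phone-number regex above:

```javascript
// A literal substring scan vs. the phone-number regex from the comment above.
const doc = "Call us at +91 987 654 321 for support";

// Plain character scan: no pattern compilation involved.
const literalIndex = doc.indexOf("+91");

// Regex engine: compiles the pattern into a matcher first.
const phoneRegex = /(?:\+91|\(\+91\))[ -]?\d{3}[ -]?\d{3}[ -]?\d{3}/;
const match = doc.match(phoneRegex);

console.log(literalIndex); // 11 — both approaches locate the number here
console.log(match[0]);     // "+91 987 654 321"
```

The literal scan only answers "where does +91 occur?", while the regex validates the whole number format in one pass; that flexibility is what the state-machine overhead buys.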
Yeah, it depends on various factors, like how many times the regex is being executed, etc.
E.g., if your string is 'QABC' and the pattern is 'ABC', then the naive algorithm will perform better.
I read somewhere about progress being made in speeding up regex matching by combining it with string matching algorithms.
That first one is true in js, but in most languages it's false.
Using regex to find string matches is still quite slow, but it does work fairly well. In a compiled language like Rust, C, or Go, it will be quite consistent and run in fairly predictable time (unless GC interrupts it).
The short of it is: avoid regexes where possible. There are many premade solutions available.
Amazing post!!!!
I'll leave here an implementation of BM and KMP that I did in C for a university task, in case someone wants to dig deeper into this.
github.com/Brugui7/Algorithmics/tr...
That's awesome! Thanks for reading :)
Thanks for reading :)
Boyer-Moore and KMP are both O(m+n) in the worst case. Please fix the typo (or check your references).
Worst case is still O(mn).
Read this : cs.cornell.edu/courses/cs312/2002s...
KMP is definitely O(m+n) even in the worst case, because after the table construction (O(m)) it's just a linear scan over the string (O(n)).
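That two-phase structure is easy to see in code. A compact sketch of KMP (my own illustrative version, not taken from the article):

```javascript
// KMP search: O(m) failure-table construction + O(n) scan = O(m + n) worst case.
function kmpSearch(text, pattern) {
  const m = pattern.length;
  const n = text.length;
  if (m === 0) return 0;

  // fail[j] = length of the longest proper prefix of pattern[0..j]
  // that is also a suffix of it.
  const fail = new Array(m).fill(0);
  let k = 0;
  for (let j = 1; j < m; j++) {
    while (k > 0 && pattern[j] !== pattern[k]) k = fail[k - 1];
    if (pattern[j] === pattern[k]) k++;
    fail[j] = k;
  }

  // Linear scan: the text index i never moves backwards,
  // which is exactly why the scan is O(n).
  let q = 0; // number of pattern characters matched so far
  for (let i = 0; i < n; i++) {
    while (q > 0 && text[i] !== pattern[q]) q = fail[q - 1];
    if (text[i] === pattern[q]) q++;
    if (q === m) return i - m + 1; // match ends at position i
  }
  return -1;
}
```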
Thanks for sharing! Updated!
Agreed, but your article shows O(mn).
This algorithm works well if the alphabet is reasonably big, but not too big. If the last character usually fails to match, the current shift s is increased by m each time around the loop. The total number of character comparisons is typically about n/m, which compares well with the roughly n comparisons that would be performed in the naive algorithm for similar problems. In fact, the longer the pattern string, the faster the search! However, the worst-case run time is still O(nm).

The algorithm as presented doesn't work very well if the alphabet is small, because even if the strings are randomly generated, the last occurrence of any given character is near the end of the string. This can be improved by using another heuristic for increasing the shift. Consider this example:

T = ...LIVID_MEMOIRS...
P = EDITED_MEMOIRS
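Assuming the alignment implied by that example (the suffix "D_MEMOIRS" matches, then 'E' in the pattern mismatches 'I' in the text), a quick sketch shows why the last-occurrence heuristic alone barely helps here:

```javascript
// Bad-character shift for the mismatch in the example above.
// P = "EDITED_MEMOIRS" aligned under "...LIVID_MEMOIRS":
// "D_MEMOIRS" matches right-to-left, then pattern 'E' vs text 'I'.
const pattern = "EDITED_MEMOIRS";
const last = new Map();
for (let j = 0; j < pattern.length; j++) last.set(pattern[j], j);

const j = 4;          // mismatch position in the pattern (the 'E')
const badChar = "I";  // mismatching character in the text
const shift = Math.max(1, j - (last.get(badChar) ?? -1));

console.log(shift); // 1 — 'I' last occurs near the END of the pattern,
                    // so the bad-character rule allows only a tiny shift
```

A good-suffix heuristic would instead exploit the long matched suffix "_MEMOIRS" and shift much further, which is the improvement the comment is pointing at.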
At first, while reading I was confused about the time it takes to match each pattern.
Nice read, let me try implementing it.
Thanks for reading :)
Nice profile picture of Gilfoyle. Cool post, very interesting for a newbie.
Thanks for reading :)
Thanks for reading :)
This is interesting !
Thanks for reading :)
Amazing post! Thanks
Thanks for reading :)
Really interesting!
Good one and very informative.
Thanks for reading :)
Step 2 seems flawed. What if a character occurs more than once in the pattern? Then you'll only store the last index.
Yep, that's the beauty of the algorithm: since we store the last index, the "skip" stays safe. And when the characters at the end do not match at all, we skip even more characters, bringing down the overall number of comparisons.
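A tiny illustration of that overwrite behavior (the pattern here is chosen just for demonstration):

```javascript
// Last-occurrence table: later occurrences overwrite earlier ones,
// so each character maps to its LAST index in the pattern.
const pattern = "ABCAB";
const last = {};
for (let j = 0; j < pattern.length; j++) last[pattern[j]] = j;

console.log(last); // { A: 3, B: 4, C: 2 }
```

Keeping the last index is the conservative choice: it never shifts the pattern past a possible match, so correctness is preserved even when a character repeats.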
This is interesting. Thanks for sharing.
Thanks a lot for reading :)
Good Explanation
Nice one
Thanks for reading :)
thanks
Great article; now it makes me think further. How does "Search in File.." in IntelliJ work? :D