F# solution, ~460 milliseconds runtime. Edit: Previously 620ms when I had the procedures nicely separated. The current results are from going completely imperative and generating new filters as needed.
About as imperative as possible. But if performance in a sieve is your problem, then imperative mutation is the answer. I actually made an Elm solution first for understandability, then translated it into imperative F#. The Elm solution took around 40 seconds to run in the browser. Sooooo much data copying.
```fsharp
open System

// Max is the number of items to wait before removal
// Count is items seen since last removal
type Filter() =
    [<DefaultValue>] val mutable public Max : int
    [<DefaultValue>] val mutable public Count : int

[<EntryPoint>]
let main args =
    // pre-allocating capacity
    let results = Array.zeroCreate<int> 71918
    // 1 is a given
    results.[0] <- 1
    // index to place the next result
    let mutable nextResult = 1
    let iterationCount = 1_000_000
    // experimentally determined 6507 filters for 1,000,000
    // first filter (odd numbers) built into looping logic
    let filterCount = 6506
    let filters = Array.zeroCreate<Filter> filterCount
    // index to place the next filter
    let mutable nextFilter = 0
    let timer = System.Diagnostics.Stopwatch()
    timer.Start()
    let mutable n = 3
    while n < iterationCount do
        let mutable i = 0
        let mutable survived = true
        // check each filter to see if it will discard the number
        // stop as soon as it is discarded
        while i < nextFilter && survived do
            let filter = filters.[i]
            let newCount = filter.Count + 1
            // update filter state... this is much faster than modulo
            if filter.Max = newCount then
                filter.Count <- 0
                survived <- false
            else
                filter.Count <- newCount
                survived <- true
            i <- i + 1
        // survived the gauntlet
        if survived then
            results.[nextResult] <- n
            nextResult <- nextResult + 1
            //printfn " %i" i
            // create a new filter, if there is room
            if nextFilter < filterCount then
                filters.[nextFilter] <- Filter(Max = n, Count = nextResult)
                nextFilter <- nextFilter + 1
        n <- n + 2 // odds
    timer.Stop()
    printfn "Count: %i" nextResult
    printfn "Elapsed Milliseconds: %i" timer.ElapsedMilliseconds
    printfn "Memory Usage: %i bytes" <| GC.GetTotalMemory(false)
    printfn "Garbage Collections: %i gen0, %i gen1, %i gen2"
        <| GC.CollectionCount(0)
        <| GC.CollectionCount(1)
        <| GC.CollectionCount(2)
    0 // exit code
```
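The `newCount` bookkeeping in the inner loop is just a division-free way of asking whether a filter has seen a multiple of `Max` items. Here is a small standalone sketch of that equivalence (the function names are mine, for illustration only):

```fsharp
// Two equivalent ways to decide whether a filter discards its j-th item.
// The sieve keeps a running count and resets it on discard, avoiding the
// division that `%` performs on every single check.

// modulo version: discard when the 1-based item index is a multiple of max
let discardByModulo (max: int) (itemIndex: int) =
    itemIndex % max = 0

// counter version: thread the count through, reset it to 0 on discard
let discardByCounter (max: int) (count: int) =
    let newCount = count + 1
    if newCount = max then (true, 0) else (false, newCount)

// Walking a stream of items through both gives the same discard pattern
let checkEquivalence max items =
    let mutable count = 0
    [ for i in 1 .. items ->
        let (discard, nextCount) = discardByCounter max count
        count <- nextCount
        discard = discardByModulo max i ]
    |> List.forall id
```

The counter version also fits the streaming setup naturally, since each `Filter` already holds its own `Count` between numbers.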
Some explanation
The idea is not to copy lists, but to filter out numbers as they come in. Each round of elimination is considered a "filter": the more filters, the lower the margin of error in the results. I experimentally determined (by running the code and tweaking the number) that the number of filters required for accurate results up to 1 million input numbers is 6507. In other words, 6507 rounds of elimination. You can tell you have enough filters for a given input size because, once you reach that point, increasing the filter count further will not change the number of results.
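For comparison, the copy-heavy version of what the filters compute can be sketched directly as repeated rounds of elimination over a list: each round takes the next surviving number k and deletes every k-th survivor. This sketch is my own illustration (not the optimized code above), assuming the sieve in question is the classic "delete every k-th survivor" sieve that the filter counters implement:

```fsharp
// List-based sketch: apply elimination rounds until no filter can remove
// anything. Slow (rebuilds the list every round), but easy to follow.
let luckyUpTo limit =
    // start from the odd numbers: the first filter (remove every 2nd)
    // applied to 1 .. limit, which the fast version bakes into its loop
    let mutable survivors = [1 .. 2 .. limit]
    let mutable filterIndex = 1   // survivors.[0] = 1 never eliminates anything
    let mutable finished = false
    while not finished do
        if filterIndex >= List.length survivors then
            finished <- true
        else
            let k = List.item filterIndex survivors
            if k > List.length survivors then
                // this round (and all later ones) would remove nothing
                finished <- true
            else
                // drop the element at every k-th (1-based) position
                survivors <-
                    survivors
                    |> List.indexed
                    |> List.filter (fun (i, _) -> (i + 1) % k <> 0)
                    |> List.map snd
                filterIndex <- filterIndex + 1
    survivors

// e.g. luckyUpTo 25 keeps [1; 3; 7; 9; 13; 15; 21; 25]
```

Every round here corresponds to one `Filter` in the imperative version; the fast code just runs all the rounds simultaneously as each number streams past, which is why it never needs to materialize the intermediate lists.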