DEV Community

Heiko Dudzus

Write a program or script to find Lucky Numbers

Lucky numbers

After the "Happy Numbers" challenge, here are "Lucky Numbers": some positive integers are lucky because they survive the sieve of Josephus.

The number 1 is lucky by definition.

The successor of 1 is 2. So, every second number gets eliminated:
1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, ...

The next number is 3. Now, every third number gets eliminated:
1, 3, 7, 9, 13, 15, 19, 21, ...

Number 3 is lucky! Its successor among the survivors is 7. Now, every seventh number gets eliminated. And so on.

Read more about it in this Wikipedia article.
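
The sieving steps above can be sketched directly. Here is a minimal, unoptimized Python version (Python is one of the languages attempted in the comments below); the function name is mine, and it uses the odd-numbers shortcut from the edit note:

```python
def lucky_numbers(n):
    """Return the lucky numbers between 1 and n (straightforward list sieve)."""
    # The first pass (eliminate every second number) just leaves the odds.
    nums = list(range(1, n + 1, 2))
    i = 1  # nums[0] == 1 is lucky by definition; nums[1] == 3 sieves next
    while i < len(nums) and nums[i] <= len(nums):
        step = nums[i]
        # delete every step-th survivor (1-based positions step, 2*step, ...)
        del nums[step - 1::step]
        i += 1
    return nums

print(lucky_numbers(100))  # 23 numbers, starting [1, 3, 7, 9, 13, 15, 21, 25, ...]
```

Once the next sieving number exceeds the length of the remaining list, no further pass can remove anything, so the loop stops there.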

Challenge

1) Write a program or script in your language of choice that prints the lucky numbers between 1 and n.
2) Try to make it as fast as possible at sieving the lucky numbers between 1 and a million. (It is perhaps sensible to measure the time without printing the results.)

Edit: Consider starting with odd numbers.

I found it especially interesting to make it fast. That seems to depend primarily on the data structure used. It has to meet different requirements that sometimes seemed contradictory. I am really curious what data structures you use in your implementations.

Verification

There are

  • 23 lucky numbers below 100,
  • 153 below 1000,
  • 1118 below 10000,
  • 8772 below 100000,
  • and 71918 lucky numbers among the first million positive integers.

The lucky numbers between 1 and 300:

[1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99, 105, 111, 115, 127, 129, 133, 135, 141, 151, 159, 163, 169, 171, 189, 193, 195, 201, 205, 211, 219, 223, 231, 235, 237, 241, 259, 261, 267, 273, 283, 285, 289, 297]

Top comments (16)

Kasey Speakman • Edited

F# solution, ~460 milliseconds runtime. Edit: previously 620 ms when I had the procedures nicely separated. The current results come from going completely imperative and generating new filters as needed.

foo> dotnet build -c Release
...

foo> dotnet bin\Release\netcoreapp2.0\MyConsoleApp.dll
Count: 71918
Elapsed Milliseconds: 456
Memory Usage: 916208 bytes
Garbage Collections: 0 gen0, 0 gen1, 0 gen2

About as imperative as possible. But if performance on a sieve is your problem, then imperative mutations are the answer. I actually made an Elm solution first for understandability and then translated it into imperative F#. The Elm solution took like 40 seconds to run in the browser. Sooooo much data copying.

open System

// Max is the number of items to wait before removal
// Count is items seen since last removal
type Filter() =
    [<DefaultValue>] val mutable public Max : int
    [<DefaultValue>] val mutable public Count : int

[<EntryPoint>]
let main args =

    // pre-allocating capacity
    let results =
        Array.zeroCreate<int> 71918
    // 1 is a given
    results.[0] <- 1
    // index to place the next result
    let mutable nextResult = 1

    let iterationCount = 1_000_000
    // experimentally determined 6507 filters for 1,000,000
    // first filter (odd numbers) built into looping logic
    let filterCount = 6506 

    let filters =
        Array.zeroCreate<Filter> filterCount
    // index to place the next filter
    let mutable nextFilter = 0

    let timer =
        System.Diagnostics.Stopwatch()


    timer.Start()


    let mutable n = 3
    while n < iterationCount do
        let mutable i = 0
        let mutable survived = true
        // check each filter to see if it will discard the number
        // stop as soon as it is discarded
        while i < nextFilter && survived do
            let filter = filters.[i]
            let newCount = filter.Count + 1
            // update filter state... this is much faster than modulo
            if filter.Max = newCount then
                filter.Count <- 0
                survived <- false
            else
                filter.Count <- newCount
                survived <- true
            i <- i + 1

        // survived the gauntlet
        if survived then
            results.[nextResult] <- n
            nextResult <- nextResult + 1
            //printfn " %i" i

            // create a new filter, if there is room
            if nextFilter < filterCount then
                filters.[nextFilter] <- Filter(Max=n, Count=nextResult)
                nextFilter <- nextFilter + 1

        n <- n + 2 // odds


    timer.Stop()


    printfn "Count: %i" nextResult
    printfn "Elapsed Milliseconds: %i" timer.ElapsedMilliseconds
    printfn "Memory Usage: %i bytes" <| GC.GetTotalMemory(false)
    printfn "Garbage Collections: %i gen0, %i gen1, %i gen2"
        <| GC.CollectionCount(0)
        <| GC.CollectionCount(1)
        <| GC.CollectionCount(2)

    0 // exit code

Some explanation

The idea is not to copy lists, but to filter out numbers as they come in. Each round of elimination is considered a "filter". The more filters, the lower the margin of error in the results. I experimentally determined (by running the code and tweaking the number) that the number of filters required for accurate results up to 1 million input numbers is 6507. In other words, 6507 rounds of elimination. You can tell the required filter count for a given size because once you reach it, increasing the filter count further will not change the number of results.
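
The filter scheme described above translates almost mechanically to other languages. Here is a rough Python transcription (names are mine; the filter list is grown without a preset cap, which makes the result exact rather than dependent on a tuned filter count):

```python
def lucky_via_filters(limit):
    """Stream odd candidates through stateful elimination filters."""
    results = [1]   # 1 is lucky by definition
    filters = []    # [max, count] pairs; the "every 2nd" round is the step-2 loop
    for n in range(3, limit, 2):
        survived = True
        for f in filters:
            f[1] += 1
            if f[1] == f[0]:   # this round eliminates n
                f[1] = 0
                survived = False
                break          # later rounds never see eliminated numbers
        if survived:
            results.append(n)
            # each survivor starts a new elimination round; its counter
            # begins at the number of survivors already collected
            filters.append([n, len(results)])
    return results
```

Note that filters past the eliminating one are deliberately not advanced: a later round only counts numbers that survived all earlier rounds.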

Meghan (she/her) • Edited

While JavaScript isn't the fastest language out there, I got an implementation working as follows. Great challenge!

Finds lucky numbers up to 1,000,000 in ~12 s

Heiko Dudzus

Fine! This runs in about 19 s on my notebook. splice() works in place; filter() generates a copy and is an additional loop. So I thought working out of place with slice() could be better. No way. I didn't even wait for it to finish. Your implementation with splice and filter is faster.

Meghan (she/her)

My goal is to see if I can make a function* generator, but I haven't yet been able to figure out a consistent pattern that can drip out numbers.

Meghan (she/her)

Sorry about deleting my comment, I totally broke my code.

So I changed the .splice(j-1, 1, 0) to a direct array access and fixed the j offset, making it 2x faster:

function getLuckyNumbersTo(n, d=false) {
    const start = Date.now();
    let lucky = new Array(n).fill(0).map((v,i) => i+1);
    if (d) console.log(lucky);
    for (let i = 2; i < lucky.length;) {
        for (let j = i; j <= lucky.length; j+=i) {
            lucky[j-1] = 0;
        }
        if (d) console.log(i, lucky.map(v => v));
        lucky = lucky.filter(v => v !== 0);
        i = lucky.find(v => v >= i + 1);
    }
    const end = Date.now();
    const duration = end - start;
    console.table({ size:n, length:lucky.length, time:duration });
    console.log(lucky);
    return lucky;
}

Ben Halpern

I look forward to reading some of the solutions. A bit too beat to do it myself at the moment 😋

Heiko Dudzus

This problem made me more familiar with Go. Some nice things to learn there. I wonder why my custom struct didn't speed things up (it didn't slow them down either, compared to Go slices). My Python was slow; I'd like to learn some NumPy.

Slow or fast..., I'm curious.

Heiko Dudzus • Edited

Kasey Speakman raised the bar very high very early. :-)

My holidays during the "Kölsche Karneval" in Cologne are over now, so I'll throw in one of my implementations before work begins again.

This is the ranking of my attempts, so far:

  • Python (slices): 157 s
  • Haskell with singly linked lists: 51 s (I didn't expect this to be fast, because accessing an element by index is of course O(n))
  • Java with a doubly linked list, with additional state keeping to avoid O(n) indexing: 28 s
  • Go with a doubly linked list, with additional state keeping: 10 s
  • Go with appending to slices, similar to the Haskell deleteEveryNth function: 10 s
  • Go with an array (in-place changes): 6.5 s
  • Haskell with boxed vectors: 3 s
  • C with an array (in-place changes): 2 s
  • Haskell with unboxed vectors: ~600 ms

import Control.Monad.State
import Control.Monad.Loops(untilM)
import qualified Data.Vector.Unboxed as V
import Data.Vector.Unboxed((!)) -- this one unqualified, please!

type Index = Int
type Vec = V.Vector

-- To delete every nth element from a list:
-- The state is the remaining list with elements to delete

-- take n elements and leave the rest as state
takeM :: V.Unbox a => Int -> State (Vec a) (Vec a)
takeM = state . V.splitAt

-- input list in state empty?
end :: V.Unbox a => State (Vec a) Bool
end = get >>= return . V.null

-- take n-1 elements and drop the nth. Repeat until the end of the list.
deleteEveryNth :: V.Unbox a => Int -> Vec a -> Vec a
deleteEveryNth n = V.concat . evalState (takeM (n-1) <* takeM 1 `untilM` end)

-- To iterate the deletion passes with different step widths:
-- The step width can be found at index i, then do a deletion pass over xs

deleteUnlucky :: (Vec Int,Index) -> (Vec Int,Index)
deleteUnlucky (xs,i) = (deleteEveryNth (xs!i) xs, i+1)

outOfRange :: (Vec Int,Index) -> Bool
outOfRange (xs,i) = i >= len || xs!i > len
  where len = V.length xs

luckyNumbers n =
  fst $ until outOfRange deleteUnlucky (V.fromList [1,3..n],1)

main =
  let l = luckyNumbers 1000000
  in print (l, V.length l)

The inner loop is deleteEveryNth, which is done by monadic parsing. The combinator <* in the parsing function takeM (n-1) <* takeM 1 means: first take n-1 elements from the input, then take 1, but return only the n-1 taken first.

The outer loop is done by until outOfRange deleteUnlucky, both functions being quite straightforward.

The code, btw, is logically exactly the same as the version with linked lists. The only difference is the V.-prefixed functions in place of the identically named list functions: most importantly V.length (O(1)) instead of length (O(n)), and, also important, the O(1) vector index operator ! instead of the O(n) !! for linked lists.

PS C:\Users\Heiko\Documents\Programmieren\Haskell> .\GluecklicheZahlen-Vector.exe +RTS -s
71918
   4,586,444,408 bytes allocated in the heap
      31,005,008 bytes copied during GC
       9,441,112 bytes maximum residency (35 sample(s))
       2,704,656 bytes maximum slop
              24 MB total memory in use (0 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0      7189 colls,     0 par    0.016s   0.037s     0.0000s    0.0003s
  Gen  1        35 colls,     0 par    0.000s   0.011s     0.0003s    0.0046s

  INIT    time    0.000s  (  0.000s elapsed)
  MUT     time    0.547s  (  0.536s elapsed)
  GC      time    0.016s  (  0.048s elapsed)
  EXIT    time    0.000s  (  0.002s elapsed)
  Total   time    0.562s  (  0.585s elapsed)

  %GC     time       2.8%  (8.1% elapsed)

  Alloc rate    8,386,641,203 bytes per MUT second

  Productivity  97.2% of total user, 91.8% of total elapsed

PS C:\Users\Heiko\Documents\Programmieren\Haskell>

I'd like to get a C implementation nearly as fast as the Haskell and F# versions. I don't know why my C is slower. I'll share my attempts soon...

Kasey Speakman • Edited

So I pretty much went back and inlined everything in the F# solution and updated my post. Got the time way down. It should very nearly read like especially bad C code, if you want to see how it does when ported to C.

Your Haskell implementation is quite impressive in performance and terseness.

Heiko Dudzus • Edited

Great to hear that. I'm afraid I can't fully claim credit for the performance. That seems to be an achievement of the author of the vector library (Roman Leshchinskiy) and the authors of the Glasgow Haskell Compiler. ;-)

You sped up your program, so I let the compiler inline deleteEveryNth using the pragma {-# INLINE deleteEveryNth #-}

I had a run in 469 ms CPU time. :-D (Of course, it all depends on hardware. But I also wanted to take the next step and profit from inlining.)

I need some more time to go through your code to understand and port it to C. Great.

Kasey Speakman

I added a lot of comments. Hopefully that will help with a C translation. I just noticed you are starting with odds, whereas I am filtering them out at the cost of extra CPU time. I'll update my version tonight to do that as well.

I'm still quite in awe of the Haskell version. It is quite expressive while performing just as well. I don't understand all the specific functions and operators, but I "get" it in a general sense. Whereas I had to remove any semblance of human thought to make mine run faster.

Kasey Speakman

Turned out starting with odds didn't really change the runtime at all. But it led me to some shorter code that accomplished the same thing.

Meghan (she/her)

I also made a Go implementation that does 1,000,000 in 3s

func findLuckyNumbersTo(n int) []int {
    start := time.Now()
    lucky := make([]int, n)
    for i := 0; i < n; i++ {
        lucky[i] = i + 1
    }
    for i := 2; i < len(lucky); {
        for j := i; j <= len(lucky); j += i {
            lucky[j-1] = n + 1
        }

        k := 0
        for j := 0; j < len(lucky); j++ {
            if lucky[j] != n+1 {
                k += 1
            }
        }
        luk_tmp := make([]int, k)
        l := 0
        for j := 0; j < len(lucky); j++ {
            if lucky[j] != n+1 {
                luk_tmp[l] = lucky[j]
                l += 1
            }
        }
        lucky = luk_tmp

        for j := 0; j < len(lucky); j++ {
            if lucky[j] >= i+1 {
                i = lucky[j]
                break
            }
        }
    }
    end := time.Now()
    dur := end.Sub(start)
    fmt.Printf("size:%d,\tlength:%d,\ttime:%f\n", n, len(lucky), dur.Seconds())
    return lucky
}

Heiko Dudzus • Edited

1.6 s on my laptop. I could speed this one up to 1.2 s by counting every element you set to n+1. Then:

k := len(lucky) - counter

No need for the second inner loop (counting up k), then.

Nice. My best Go attempt on an array was 6.5 s. I have to think about why this is faster.
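
As a sketch of that counting trick (in Python for brevity; the function name and sentinel parameter are mine): tally the eliminations while marking, so the pass that only counts survivors disappears.

```python
def eliminate_and_compact(lucky, step, sentinel):
    """Mark every step-th element with sentinel, counting as we go,
    then compact the survivors in a single remaining pass."""
    removed = 0
    for j in range(step - 1, len(lucky), step):
        lucky[j] = sentinel
        removed += 1
    survivors = [0] * (len(lucky) - removed)  # k = len(lucky) - counter
    l = 0
    for v in lucky:
        if v != sentinel:
            survivors[l] = v
            l += 1
    return survivors
```

Applied repeatedly with the successive lucky steps, this performs the same rounds as the Go version with one loop fewer per round.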

Josha von Gizycki

As always: happy to provide a Clojure solution:

(defn luckies [n]
  (loop [working-set (range 1 (inc n))
         ln 2
         step 1]
    (if (< ln (count working-set))
      (let [new-ws (keep-indexed
                    (fn [ix item]
                      (when (not= 0 (mod (inc ix) ln)) item))
                    working-set)]
        (recur new-ws (nth new-ws step) (inc step)))
      (println (count working-set)))))

This is just a first draft; I was happy to have running code in about half an hour. I bet there's a lot of room for improvement: it calculates the first million numbers in 147 seconds.

Josha von Gizycki

Got it down to a still very slow 100 seconds by using the vec function on the sieved result to make it a vector instead of a linked list. This way the nth call inside the recur statement is way faster.

(defn luckies [n]
  (loop [working-set (range 1 (inc n))
         ln 2
         step 1]
    (if (< ln (count working-set))
      (let [new-ws (vec
                    (keep-indexed
                     (fn [ix item]
                       (when (not= 0 (mod (inc ix) ln)) item))
                     working-set))]
        (recur new-ws (nth new-ws step) (inc step)))
      (println (count working-set)))))