
AoC Day 5: Alchemical Reduction

Ryan Palo on December 05, 2018

Alright! We got through Day 4. It's done. We completed the tasks given to us. For most of us, it is dead to us and we will speak of it no more....
Ali Spittel • Edited

Dumb order of operations mistake got me to this point 🙃 I had been missing the parens around the last half of the conditional since like 12:20. This mirrors the classic stack-based matching-parentheses problem.

My solution is kinda pretty though:

with open('input.txt', 'r') as f:
    text = ''
    for line in f:
        text += line.strip()

def react(text):
    stack = []
    for letter in text:
        last = stack[-1] if stack else None
        if letter != last and (last == letter.upper() or last == letter.lower()):
            stack.pop()
        else:
            stack.append(letter)
    return len(stack)

# A1
print(react(text))

# A2
possibilities = set(text.lower())
print(min(react(text.replace(p, '').replace(p.upper(), '')) for p in possibilities))
jess unrein • Edited

My first inclination was to do this recursively. Had to fudge it a bit because a truly recursive solution exceeds the maximum recursion depth :(

Here's my Python for part 1. It's very slow. Gonna look into refactoring to a regex or list comprehension solution for part 2. Kind of ready to move out of string manipulation land either way!

def polymer_compression(polymer):
    components = list(polymer)
    compressed_polymer, changes = find_sets(components)
    while changes is True:
        compressed_polymer, changes = find_sets(compressed_polymer)

    return len(compressed_polymer)

def find_sets(components):
    prev_val = None
    prev_case = None
    changes = False
    for index, component in enumerate(components):
        current_val = ord(component.lower())
        current_case = component.istitle()
        if current_val == prev_val and current_case != prev_case:
            components.pop(index)
            components.pop(index - 1)
            changes = True
            break
        prev_val = current_val
        prev_case = current_case
    return components, changes

print(polymer_compression(open('input.txt', 'r').read()))
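
For anyone wondering why the "truly recursive" version falls over: each position (or each removed pair) adds a stack frame, and the puzzle input is tens of thousands of units long, well past Python's default recursion limit of about 1000. A rough sketch of that shape, purely illustrative and not jess's code:

def react_recursive(polymer, i=0):
    # one frame per position: a ~50k-character input raises RecursionError
    if i + 1 >= len(polymer):
        return polymer
    a, b = polymer[i], polymer[i + 1]
    if a != b and a.lower() == b.lower():
        # drop the reacting pair and step back one position
        return react_recursive(polymer[:i] + polymer[i + 2:], max(i - 1, 0))
    return react_recursive(polymer, i + 1)
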
Russ Hammett

Wow, that stack thing is clever! It didn't even occur to me that you could evaluate as you go in a single pass through the polymer. Implementing the stack in C# (from the impl I posted below) took it from ~2:50 => 268ms :O

Ali Spittel

Cool! Yeah -- it's kind of a play on this classic problem, which is how I recognized it!
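
For anyone who hasn't seen it, the classic problem is checking whether a string of brackets is balanced. A minimal illustrative sketch in Python (not part of Ali's solution); Day 5 just swaps "a closer cancels its opener" for "a unit cancels its opposite-case partner":

PAIRS = {')': '(', ']': '[', '}': '{'}

def balanced(s):
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in PAIRS:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != PAIRS[ch]:
                return False
    return not stack

print(balanced('([]{})'))  # True
print(balanced('([)]'))    # False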

Bjarne Magnussen • Edited

Very slick solution!

Maybe most already know this... but keeping the polymer as a string and using str.replace twice, as you do, is still much faster than turning the polymer into a list and using a list comprehension to remove units (the list version is about 35% slower).

E.g.

# Faster:
print(min(react(text.replace(p, '').replace(p.upper(), '')) for p in possibilities))

# Slower:
text = list(text)
print(min(react([unit for unit in text if unit not in (p, p.upper())]) for p in possibilities))
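
If anyone wants to reproduce the comparison, a rough harness along these lines should do it (hypothetical measurement code; react and text are the ones from Ali's solution above, and exact numbers will vary by input and machine):

import timeit

setup = "possibilities = set(text.lower())"

# string + str.replace
print(timeit.timeit(
    "min(react(text.replace(p, '').replace(p.upper(), '')) for p in possibilities)",
    setup=setup, globals=globals(), number=3))

# list + list comprehension
print(timeit.timeit(
    "min(react([u for u in text if u not in (p, p.upper())]) for p in possibilities)",
    setup=setup, globals=globals(), number=3))
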
Dane Hillard

Wow, I did not see the stack-based solution in there at all. I tried the recursive approach and then one similar to @thejessleigh's and stopped there! At least I enjoyed this one more than Day 4!

Nicolas Gleichgerrcht

I went with a RegExp solution. I created a regexp for all the reactions, deleted everything that matched, then iterated until there were no more matches.

const polymer = readFileSync('./data', {encoding: 'utf8'}).trim()

// A-Z
const upperLetters = [...Array(26)].map((_,i) => String.fromCharCode(65+i))

// A RegExp of the form Aa|...|Zz|aA|...|zZ
const regexp = new RegExp(
  upperLetters
    .map(upper => `${upper}${upper.toLocaleLowerCase()}`)
    .concat(upperLetters.map(upper => `${upper.toLocaleLowerCase()}${upper}`))
    .join("|"), "g")

// while there is a match for the regexp, replace all and retry. Returns the length
const reactToPolymer = polymerToReact => {
  while(regexp.test(polymerToReact)) {
    polymerToReact = polymerToReact.replace(regexp, "")
  }
  return polymerToReact.length
}

console.log(reactToPolymer(polymer))

Time: 0.265s

For part B I reuse all the code but add this:

const reactions = upperLetters.map(letter => reactToPolymer(polymer.replace(new RegExp(letter, "gi"), "")))

console.log(Math.min(...reactions))

Time: 3.693s

I find part B a little slow using this approach.

Russ Hammett • Edited

This one was easy (to me!) in comparison to yesterday's; it might even be the easiest thus far. All that being said, DAMNNNNNN is part 2 slow, at least with my current impl.

00:07 run time for part 1
02:50 run time for part 2 :OOOOO

public class AlchemicalReduction
    {
        public string ReducePolymer(string polymer)
        {
            while (true)
            {
                StringBuilder sb = new StringBuilder();
                var previousChar = "";

                for (int i = 0; i < polymer.Length; i++)
                {
                    string currentChar = new string(new[] { polymer[i] });

                    // Same letter
                    if (currentChar.Equals(previousChar, StringComparison.OrdinalIgnoreCase))
                    {
                        // different case
                        if (!currentChar.Equals(previousChar, StringComparison.Ordinal))
                        {
                            // Remove the last character from the builder, don't add this character
                            sb.Remove(i - 1, 1);
                            // Add the remaining characters to the builder
                            sb.Append(polymer.Substring(i + 1, polymer.Length - i - 1));
                            // reset the previous char for next entry into for loop
                            previousChar = "";
                            break;
                        }
                    }

                    // Record new previous char, append the current char to the builder
                    previousChar = currentChar;
                    sb.Append(currentChar);
                }

                // Completed a pass of the for loop, build the string
                var newString = sb.ToString();

                // break out of while loop if they're the same string (has been reduced by maximum amount)
                if (polymer == newString)
                {
                    break;
                }

                // Work with the newly modified string within the for loop
                polymer = newString;
            }

            return polymer;
        }

        public string FullyReducePolymerByEliminatingSingleUnit(string polymer)
        {
            var originalPolymer = polymer;
            var normalizedPolymer = originalPolymer.ToUpper();

            // get the individual "types" within the polymer
            var groupsOfTypes = normalizedPolymer
                .GroupBy(gb => gb)
                .Select(s => s.Key);

            // builds new list of potential polymers, removing a single type from the chain in each run
            List<string> newPotentialPolymers = new List<string>();
            foreach (var s in groupsOfTypes)
            {
                // Removes a single type from the potential polymer
                var charToRemove = new string(new[] { s });
                var regex = new Regex(charToRemove, RegexOptions.IgnoreCase);

                newPotentialPolymers.Add(regex.Replace(originalPolymer, ""));
            }

            // Run the new potential polymers
            List<string> reducedPolymers = new List<string>();
            foreach (var potentialPolymer in newPotentialPolymers)
            {
                reducedPolymers.Add(ReducePolymer(potentialPolymer));
            }

            // return the smallest one
            var minLengthPolymer = reducedPolymers.Min(m => m.Length);
            return reducedPolymers.Where(w => w.Length == minLengthPolymer).First();
        }
    }

Didn't think about the stack approach til seeing others' solutions, wow so much faster! It didn't really occur to me that you could accomplish the whole reduction in a single pass through the polymer.

Part 1 went from 00:07 -> 12ms
Part 2 went from 02:50 -> 269ms

public string ReducePolymer(string polymer)
        {
            // Refactored to stack with input from others solutions at: 
            // https://dev.to/rpalo/aoc-day-5-alchemical-reduction-5b2f/comments

            Stack<string> stack = new Stack<string>();
            foreach (var c in polymer)
            {
                var s = new string(new[] { c });

                // the top item in the stack (or empty string if we haven't yet added to stack)
                var top = stack.Count == 0 ? string.Empty : stack.Peek();

                // if the top item is the same letter, 
                // but different case than the currently evaluated character,
                // remove the top item from the stack, and don't add the current character.
                if (top != "" && top.ToLower() == s.ToLower() && top != s)
                {
                    stack.Pop();
                }
                // No match, add the character to the stack
                else
                {
                    stack.Push(s);
                }
            }

            // Is there a better way to project the stack back into a contiguous string?
            var sb = new StringBuilder();
            while(stack.Count > 0)
            {
                sb.Append(stack.Pop());
            }

            return sb.ToString();
        }
Shahor

I wrote this solution pretty quickly... and then kept having the wrong value.

I went down a rabbit hole involving hexdump and such because I was pretty confident in what the code was supposed to do and didn't see a valid result come up. It drove me crazy, but finally after way more time digging than I'd like to admit, here's my solution:

import Fs from "fs"
import Path from "path"

const CASE_ASCII_OFFSET = 32

let input = Fs.readFileSync(Path.join(__dirname, "input.txt"))
    .toString()
    .trim()

for (let i = 0; i < input.length; i++) {
    const currentValue = input.charAt(i)
    const nextValue = input.charAt(i + 1)

    // reached the end (charAt returns an empty string past the last index)
    if (nextValue === "") {
        continue
    }

    const isSameTypeAndOppositePolarity =
        Math.abs(currentValue.charCodeAt(0) - nextValue.charCodeAt(0)) ===
        CASE_ASCII_OFFSET

    if (isSameTypeAndOppositePolarity) {
        input = input.slice(0, i) + input.slice(i + 2)
        // Coming back to previous position but since it's going to be
        // incremented by the for loop, let's take a supplementary step
        i = Math.max(-1, i - 2)
    }
}

console.log(input.length)

I'll try and find some time to write up what problem I encountered

E. Choroba • Edited

I reached for regular expressions in Perl. I wasn't sure about the performance, but 0.4s for part 2 seemed good enough.

The regex is constructed dynamically and simply lists all the possible pairs to remove.

#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };

my $regex = join '|', map { +"$_\u$_", "\u$_$_" } 'a' .. 'z';
                                                  # fix highlighting bug: '
chomp( my $input = <> );

1 while $input =~ s/$regex//g;
say length $input;

For part 2, I discovered you can start from the already reduced polymer, which improved the performance more than 10 times.

#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };

my $regex = join '|', map { +"$_\u$_", "\u$_$_" } 'a' .. 'z';
                                                  # fix highlighting bug: '
chomp( my $input = <> );

1 while $input =~ s/$regex//g;
my $min = length $input;
for my $letter ('a' .. 'z') {
    next unless $input =~ /$letter/i;

    my $removed = $input =~ s/$letter//gri;
    1 while $removed =~ s/$regex//g;
    $min = length $removed if length $removed < $min;
}

say $min;
Ben Lovy • Edited

...welp, that was a solid "d'oh" moment, opening this thread. Hindsight is 20/20.

Here's the brute force way in F#. I'll have to come back to it after work to implement the smart way.

module util =
  let getInput inputFile =
    System.IO.File.ReadAllText(inputFile)

  let doesReact first second =
    (System.Char.ToUpper first = System.Char.ToUpper second) && first <> second

  let rec reactString altered result input =
    match input with
    | [] -> if altered then reactString false "" (string result |> List.ofSeq) else result
    | [a] -> if altered then reactString false "" (string result + string a |> List.ofSeq) else result + string a
    | head::next::tail ->
      if doesReact head next then
        reactString true result tail
      else
        reactString altered (result + string head) ([next] @ tail)

module part1 =
  open util

  let execute inputFile =
    getInput inputFile
    |> List.ofSeq
    |> reactString false ""
    |> String.length

module part2 =
  open util
  let cleanInput targetPair input =
    List.filter (fun el -> el <> System.Char.ToUpper targetPair && el <> System.Char.ToLower targetPair) input
  let execute inputFile =
     let input = getInput inputFile |> List.ofSeq
     let allPairs = [ for l in 'a' .. 'z' do yield l ]
     let res = List.map (fun el -> (el, reactString false "" (cleanInput el input))) allPairs
     List.sortBy (fun el -> String.length (snd el)) res
     |> List.head
     |> snd
     |> String.length
Ben Lovy • Edited

Yep, stack-based trick worked just fine:

  let reactQuickly input =
    Seq.fold (fun s c ->
      let last = if Array.length s > 0 then Some (Array.last s) else None
      match last with
      | Some x ->
        if c <> x && (x = System.Char.ToUpper c || x = System.Char.ToLower c) then
          Array.sub s 0 (Array.length s - 1)
        else Array.append s [| c |]
      | None -> Array.append s [| c |]) [| |] input
        |> Array.length

Replacing reactString with reactQuickly was enough to bring the total runtime from several (many) minutes to around 3 seconds.

Using the tip from @choroba and reducing everything first would probably speed it up even more.

Ryan Palo

I originally had an iterative, while-loopy, pointer-based solution, and that ran OK, but like a lot of people I realized that this is basically an extended version of the "Matching Parentheses" problem, and that a stack would let me iterate through the list with far fewer backward steps. Now it runs pretty quick!

Rust's lack of easy handling of upper- and lowercase characters kind of bummed me out.

Part 1

/// Day 5: Alchemical Reduction

use std::collections::HashSet;

// Part 1: How many units remain after fully reacting a polymer

/// Reduce down a polymer by dropping any mer pairs
/// 
/// A mer pair is any lowercase letter and its corresponding uppercase
/// letter, adjacent to each other
/// Optionally, provide a "drop_mer" to drop unconditionally (case insensitive),
/// whether it is part of a pair or not
pub fn reduce(polymer: &str, drop_mer: Option<char>) -> String {
    let mut result: Vec<char> = Vec::new();
    for c in polymer.chars() {
        if matches_drop_mer(c, drop_mer) { continue; }
        if result.last().is_some() && is_polymer_pair(*result.last().unwrap(), c) {
            result.pop();
        } else {
            result.push(c);
        }
    }
    result.into_iter().collect()
}

fn is_polymer_pair(first: char, second: char) -> bool {
    (first.is_lowercase() && second.is_uppercase() && 
        second.to_lowercase().next().unwrap() == first) ||
    (first.is_uppercase() && second.is_lowercase() &&
        second.to_uppercase().next().unwrap() == first)
}

fn matches_drop_mer(c: char, drop_mer: Option<char>) -> bool {
    drop_mer.is_some()
    && c.to_lowercase().next().unwrap() == drop_mer.unwrap().to_lowercase().next().unwrap()
}

Part 2

For part 2, rather than iterating over all 26 letters of the alphabet no matter what, I used a set to figure out which characters were in the input and only looped over those. In this case it doesn't help, because the input has all 26 anyhow. Oh well. I learned a lot about .collect() today.


// Part 2: Figure out which polymer, when removed, allows the most
// compacting, remove it, and return the length of the shortest polymer
// after compaction.

/// Optimizes a polymer by figuring out which *one* mer is inhibiting
/// reduction the most and removing it.  The reduced string is returned
pub fn optimize(polymer: &str) -> String {
    let possible_mers: HashSet<char> = polymer.to_lowercase().chars().collect();
    let mut result_candidates: Vec<String> = Vec::new();

    for mer in possible_mers.into_iter() {
        result_candidates.push(reduce(polymer, Some(mer)));
    }

    result_candidates.into_iter()
        .min_by_key(|candidate| candidate.len())
        .expect("No result candidates found.")
}
Carly Ho 🌈

PHP

Second part looks kind of long because I figured it would be faster to just declare the letters rather than generate the letters programmatically. I'm pretty sure there's a way to improve performance here, since this has a long-ish runtime, but I wasn't up to optimizing performance at midnight, haha. I might fiddle with this some more to see if I can do better.

Part 1:

<?php
$polymer = str_split(trim(file_get_contents($argv[1])));
do {
  $destruction = false;
  for ($i = 0; $i < count($polymer)-1; $i++) {
    $val = $polymer[$i];
    $next = $polymer[$i+1];
    if (($val == strtoupper($next) && strtoupper($next) != $next) || ($val == strtolower($next) && strtolower($next) != $next)) {
      $destruction = true;
      array_splice($polymer, $i, 2);
      break;
    }
  }
} while ($destruction);
echo count($polymer);
die(1);

Part 2:

<?php
$basepolymer = str_split(trim(file_get_contents($argv[1])));
$polymer = $basepolymer;
$shortest;
$units = array(
  array('a', 'A'),
  array('b', 'B'),
  array('c', 'C'),
  array('d', 'D'),
  array('e', 'E'),
  array('f', 'F'),
  array('g', 'G'),
  array('h', 'H'),
  array('i', 'I'),
  array('j', 'J'),
  array('k', 'K'),
  array('l', 'L'),
  array('m', 'M'),
  array('n', 'N'),
  array('o', 'O'),
  array('p', 'P'),
  array('q', 'Q'),
  array('r', 'R'),
  array('s', 'S'),
  array('t', 'T'),
  array('u', 'U'),
  array('v', 'V'),
  array('w', 'W'),
  array('x', 'X'),
  array('y', 'Y'),
  array('z', 'Z')
);
foreach ($units as $unit) {
  $polymer = array_filter($polymer, function($c) {
    global $unit;
    if (in_array($c, $unit)) {
      return false;
    }
    return true;
  });
  do {
    $destruction = false;
    for ($i = 0; $i < count($polymer)-1; $i++) {
      $val = $polymer[$i];
      $next = $polymer[$i+1];
      if (($val == strtoupper($next) && strtoupper($next) != $next) || ($val == strtolower($next) && strtolower($next) != $next)) {
        $destruction = true;
        array_splice($polymer, $i, 2);
        break;
      }
    }
  } while ($destruction);
  if (!$shortest || count($polymer) < $shortest) {
    $shortest = count($polymer);
  }
  echo $shortest;
  $polymer = $basepolymer;
}

echo $shortest;
die(1);
Bjarne Magnussen • Edited

I am using Advent of Code to learn Golang, and here is the solution I came up with. Suggestions for improvements are always welcome!

I was first using a list which would go from left to right and remove pairs as they were found reacting, shifting one step back and continuing to look for pairs that react. However, this could be simplified by using Ali Spittel's stack solution in Python.

With this inspiration, here is my Golang solution. I have added timing functions just to print out the running time; compared to Python, Go finally gives a real speedup in this case!

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
    "time"
    "unicode"
)

// readLines reads a whole file into memory
// and returns a slice of its lines.
func readLines(path string) ([]string, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer file.Close()

    var lines []string
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        lines = append(lines, scanner.Text())
    }
    return lines, scanner.Err()
}

func invertChar(c rune) rune {
    if unicode.ToUpper(c) == c {
        return unicode.ToLower(c)
    }
    return unicode.ToUpper(c)
}

func react(polymer string) int {
    stack := []rune{}
    var top rune
    for _, unit := range polymer {
        if len(stack) == 0 {
            // Initialise stack:
            stack = append(stack, unit)
            continue
        }
        top = stack[len(stack)-1]
        if top == invertChar(unit) {
            // Consume last unit:
            stack = stack[:len(stack)-1]
        } else {
            stack = append(stack, unit)
        }
    }
    return len(stack)
}

func main() {
    data, err := readLines("input")
    if err != nil {
        panic(err)
    }

    polymerString := data[0]

    // Part 1
    start := time.Now()
    fmt.Println(react(polymerString))
    elapsed := time.Since(start)
    fmt.Println(elapsed)

    // Part 2:
    start = time.Now()
    var reduced string
    minReact := -1
    charSet := "ABCDEFGHIJKLMNOPQRSTUVWXYZ" // all 26 uppercase letters
    for _, c := range charSet {
        reduced = strings.Replace(polymerString, string(c), "", -1)
        reduced = strings.Replace(reduced, string(unicode.ToLower(c)), "", -1)
        r := react(reduced)
        if minReact == -1 || r < minReact {
            minReact = r
        }
    }
    fmt.Println(minReact)
    elapsed = time.Since(start)
    fmt.Println(elapsed)

}
Ryan Palo

Nice!

Jon Bristow

Kotlin Solution

A nice easy one! Just fold into a stack and bada-bing-bada-boom!

Part 1

I started with String and String.last. This got me the solution, but it turned out to be really slow. Here's my updated code using a Java LinkedList instead, which is multiple orders of magnitude faster.

import java.util.LinkedList

fun String.react() =
    fold(LinkedList<Char>()) { acc, c ->
        when {
            acc.isEmpty() -> acc.push(c)
            acc.peek() == c -> acc.push(c)
            // same letter, opposite case: the pair reacts, so pop instead of pushing
            acc.peek().equals(c, ignoreCase = true) -> acc.pop()
            else -> acc.push(c)
        }
        acc
    }

fun answer1(input: String) = input.react().count()

Part 2

This one seemed a little too simple at first, but my brute force implementation worked in about 1500ms (still noticeably slow). After checking that my answer was correct, I did the stack optimization above, which pulled me back down to 80ms or so.

fun answer2(input: String) =
    ('a'..'z').zip('A'..'Z')
        .map { (lc, uc) ->
            input.filterNot { it == lc || it == uc }
                .react()
                .count()
        }.min()

Overall, this was a nice mid-week breather, and I'm happy to spend more of my thought cycles doing my OSCP lab work.

Tiago Romero Garcia • Edited

JavaScript solution

For this solution I used a stack. If the unit on top of the stack can react with the incoming unit, I just pop it off the stack. Otherwise, I push the incoming unit and move on.

reader.js

const fs = require('fs');
const readline = require('readline');

const readLines = (file, onLine) => {
    const reader = readline.createInterface({
        input: fs.createReadStream(file),
        crlfDelay: Infinity
    });

    reader.on('line', onLine);

    return new Promise(resolve => reader.on('close', resolve));
};

const readFile = async file => {
    const lines = [];
    await readLines(file, line => lines.push(line));  
    return lines;
}

module.exports = {
    readLines,
    readFile
};

05-common.js

const reactPolymer = polymer => {
    let stack = [];
    for (let char of polymer.split('')) {
        const top = stack[stack.length - 1];
        if (top && top.toLowerCase() === char.toLowerCase() && top !== char) {
            stack.pop();
        }
        else {
            stack.push(char);
        }
    }

    return stack;
}

module.exports = {
    reactPolymer
};

05a.js

const { readFile } = require('./reader');
const { reactPolymer } = require('./05-common');

(async () => {
    const lines = await readFile('05-input.txt');

    const polymer = reactPolymer(lines[0]);

    console.log(`The remaining units are ${polymer.length}`);
})();

05b.js

const { readFile } = require('./reader');
const { reactPolymer } = require('./05-common');

const detectUnitTypes = polymer => {
    const existence = new Set();
    return polymer.toLowerCase().split('').filter(unit => {
        if (existence.has(unit)) {
            return false;
        }
        existence.add(unit);
        return true;
    });
};

(async () => {
    const lines = await readFile('05-input.txt');

    const unitTypes = detectUnitTypes(lines[0]);

    const polymersWithoutUnit = new Map();
    for (let unit of unitTypes) {
        const polymer = reactPolymer(lines[0].replace(new RegExp(unit, 'ig'), ''));
        polymersWithoutUnit.set(unit, polymer.length);
    }

    const shortestPolymerLength = Math.min(...polymersWithoutUnit.values());

    console.log(`The shortest polymer length is ${shortestPolymerLength}`);
})();
Martin Himmel

I'm a bit behind on these, but catching up this weekend! Day 5 is my favorite so far. 😁

<?php
$input = trim(file_get_contents('./input.txt'));

function reduce_polymer($str) {
    $index = strlen($str) - 1;
    while ($index > 0) {
        $index--;
        // guard against reading past the end after a removal shortens the string
        if (!isset($str[$index + 1])) {
            continue;
        }
        if ($str[$index] == $str[$index + 1]) {
            continue;
        }
        if (strtolower($str[$index]) == strtolower($str[$index + 1])) {
            $str = substr_replace($str, '', $index, 2);
        }
    }
    return strlen($str);
}

echo 'Initial reduction: ' . reduce_polymer($input) . PHP_EOL;

$counts = [];
foreach (range('a', 'z') as $letter) {
    $copy = $input;
    $copy = str_ireplace($letter, '', $copy);
    $counts[$letter] = reduce_polymer($copy);
}

echo 'Shortest polymer length: ' . min($counts);
Mustafa Haddara • Edited

I actually managed to complete Part A with an O(n²) algorithm before realizing that would be too slow for part B... then I realized I could use a stack to implement an O(n) algorithm.

function reduction(chain)
    chain = reduction_one_pass(chain)
    return length(chain)
end

function reduction_one_pass(chain)
    result = [chain[1]]
    i = 2
    while i <= length(chain)
        a = result[end]
        b = chain[i]
        i+=1

        while a != b && uppercase(a) == uppercase(b)
            deleteat!(result, length(result))  # remove a

            # get new a,b
            if length(result) > 0
                a = result[end]
            else
                a = ""
            end
            b = chain[i]
            i+=1
        end

        # b doesn't match last char on stack
        push!(result, b)
    end
    return join(result, "")
end

function main()
    filename = ARGS[1]  # julia arrays are 1-indexed!
    input = split(read(filename, String), "\n")[1]
    test_input = "dabAcCaCBAcCcaDA"

    println(reduction(input))
end

main()

Part B iterates through all letters in the alphabet, doing the string replace and then the same reduction as Part A, while tracking the minimum reduced length. I feel like there's a better way to do that, but... I never figured it out.

function find_biggest_reduction(chain)
    smallest = -1

    # there's gotta be a pattern here but I can't quite figure it out
    # so we're brute forcing it
    char_A = Int("a"[1])
    for i in char_A:char_A+25
        letter = Char(i)
        sub = replace(replace(chain, letter=>""), uppercase(letter)=>"")
        reduced = reduction_one_pass(sub)
        println("$letter -> length: $(length(reduced))")
        if length(reduced) < smallest || smallest == -1
            smallest = length(reduced)
        end
    end
    return smallest
end

I logged my output and saw that, for my input, v reduced far better than anything else.

a -> length: 10432
b -> length: 10448
c -> length: 10510
d -> length: 10484
e -> length: 10450
f -> length: 10528
g -> length: 10424
h -> length: 10490
i -> length: 10480
j -> length: 10480
k -> length: 10444
l -> length: 10386
m -> length: 10426
n -> length: 10454
o -> length: 10476
p -> length: 10412
q -> length: 10476
r -> length: 10420
s -> length: 10426
t -> length: 10452
u -> length: 10456
v -> length: 4684
w -> length: 10468
x -> length: 10366
y -> length: 10476
z -> length: 10486

My gut tells me there's some pattern in the input that would tell you which char to remove, and that way you only have to do the reduction once (as opposed to 26 times).

Ryan Palo

I looked at that, but when it wasn't definitively the letter that showed up the most, I didn't look very much further.

I bet you're right though, there's some "clustering factor" or something that would tip you off for which letter is most productive.
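
One cheap way to poke at that hunch (purely a hypothetical diagnostic, not part of my solution): score each letter by how often it sits directly between two units that would react if it were gone. It ignores cascades entirely, so it may well point at the wrong letter, but it only takes one pass:

from collections import Counter

def blocking_scores(polymer):
    scores = Counter()
    for a, mid, b in zip(polymer, polymer[1:], polymer[2:]):
        # a and b would react if mid were removed
        if a != b and a.lower() == b.lower():
            scores[mid.lower()] += 1
    return scores

# e.g. blocking_scores(polymer_text).most_common(5)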

Christopher Kruse • Edited

Probably won't get an explanatory blog post on this one today, and I need to catch up to finish/submit yesterday's, but here's my Clojure solution for Day 5 (see gist.github.com/ballpointcarrot/7e...).

(ns aoc.aoc5)

(defn polymer-drop [[c1 c2]]
  (cond
    (= c1 c2) false
    (or (nil? c1) (nil? c2)) false
    (= (Character/toLowerCase c1) (Character/toLowerCase c2)) true
    :else false))

(defn shrink [input]
  (loop [shrunk [] chars-to-test (take 2 input) left (drop 2 input)]
    (cond
      (and (empty? left) (every? nil? chars-to-test)) (apply str shrunk)
      (nil? (first chars-to-test)) (recur shrunk [(last chars-to-test) (first left)] (rest left))
      (polymer-drop chars-to-test) (if (empty? shrunk)
                                     (recur shrunk (take 2 left) (drop 2 left))
                                     (recur (pop shrunk) [(last shrunk) (first left)] (rest left)))
      :else (recur (conj shrunk (first chars-to-test)) [(last chars-to-test) (first left)] (rest left)))))

(defn remove-char
  "Remove all instances of a character (case-insensitive)
   from a string"
  [string chr]
  (apply str (remove #(or (= % (Character/toUpperCase chr)) (= % (Character/toLowerCase chr))) string)))

(defn char-range
  [start end]
  (map char (range (int start) (inc (int end)))))

(defn find-shortest-polymer [input-string]
  (apply min (pmap #(-> input-string
                        (remove-char %)
                        (shrink)
                        (count)) (char-range \a \z))))
sneens

Hey, here's my solution in Ruby.

DATA = File.open('input.txt')
main_data = DATA.read
char_hash = Hash.new { |h, k| h[k] = [] }

def reduce_string(string)
  loop do
    last_length = string.size
    ('a'..'z').each do |letter|
      string.gsub!("#{letter}#{letter.upcase}", '')
      string.gsub!("#{letter.upcase}#{letter}", '')
    end
    break if last_length == string.size
  end
  string.size
end

def remove_letters(string, letter)
  # gsub! returns nil when nothing is replaced, so return the string explicitly
  string = string.dup
  string.gsub!(letter, '')
  string.gsub!(letter.upcase, '')
  string
end

('a'..'z').each do |letter|
  char_hash[letter] = reduce_string(remove_letters(main_data, letter))
end

puts reduce_string(main_data)
puts char_hash.values.min
Ryan Palo

Nice and clean!

Yordi Verkroost

My day 5 solutions in Elixir. I'm actually pretty happy with how the code turned out this time; it might be the cleanest of all days up until now.

The main idea of the implementation of part one is to put characters on a stack, and then compare the head of the stack with the next element to determine if the head and the new element should stay or go.

Part two is a wrapper around part one, where the input is preprocessed (i.e. the units of one type are removed) before being fed to the implementation of the first part. The units to remove are identified by their Unicode code points.

Common:


defmodule AoC.DayFive.Common do
  @lower_upper_unicode_difference 32

  def read_input(path) do
    path
    |> File.read!()
    |> String.graphemes()
  end

  def process(polymer) do
    polymer
    |> Enum.reduce([], fn unit, result -> process_unit(unit, result) end)
  end

  def get_lower_upper_unicode_difference() do
    @lower_upper_unicode_difference
  end

  defp process_unit(unit, []) do
    [unit]
  end

  defp process_unit(unit, polymer) do
    [head | rest] = polymer
    <<h::utf8>> = head
    <<u::utf8>> = unit

    case abs(h - u) do
      @lower_upper_unicode_difference -> rest
      _ -> [unit | polymer]
    end
  end
end

Part one:

defmodule AoC.DayFive.PartOne do
  alias AoC.DayFive.Common

  def main() do
    "lib/day5/input.txt"
    |> Common.read_input()
    |> Common.process()
    |> Enum.count()
  end
end

Part two:

defmodule AoC.DayFive.PartTwo do
  alias AoC.DayFive.Common

  @lower_a 97
  @lower_z 122

  def main() do
    "lib/day5/input.txt"
    |> Common.read_input()
    |> process()
  end

  defp process(polymer) do
    Enum.reduce(@lower_a..@lower_z, Enum.count(polymer), fn lower_unicode, length ->
      higher_unicode = lower_unicode - Common.get_lower_upper_unicode_difference()
      lower_character = <<lower_unicode::utf8>>
      higher_character = <<higher_unicode::utf8>>

      polymer_length =
        polymer
        |> Enum.filter(fn x -> x != lower_character and x != higher_character end)
        |> Common.process()
        |> Enum.count()

      min(polymer_length, length)
    end)
  end
end
Neil Gall • Edited

Ugh, I hated this one, mainly because I took a completely wrong approach at first in an effort to be very fast. It was fast alright but could I get it to produce the right answer?!

Ali Spittel's stack solution is beautiful.

Matt Morgis

Late, but very happy with this one and had a lot of fun

Node.js

const isUpperCase = letter => letter === letter.toUpperCase();
const isLowerCase = letter => letter === letter.toLowerCase();
const lettersAreEqual = (a, b) => a.toUpperCase() === b.toUpperCase();
const last = array => array[array.length - 1];

const unique = array => [
  ...new Map(array.map(s => [s.toLowerCase(), s])).values()
];

const doesReact = (a, b) => {
  let reacts = false;
  if (
    (isLowerCase(a) && isUpperCase(b)) ||
    (isUpperCase(a) && isLowerCase(b))
  ) {
    if (lettersAreEqual(a, b)) {
      reacts = true;
    }
  }
  return reacts;
};

const removePolarity = polymer => {
  polymer = [...polymer];
  const output = [""];

  for (const char of polymer) {
    if (doesReact(char, last(output))) {
      output.pop();
    } else {
      output.push(char);
    }
  }

  // minus one for the empty string at the start
  return output.length - 1;
};

const bestPolarity = polymer => {
  polymer = [...polymer];
  const uniqueLetters = unique(polymer);
  const results = uniqueLetters.map(letter => {
    const strippedPolymer = polymer.filter(c => !lettersAreEqual(c, letter));
    return removePolarity(strippedPolymer);
  });

  return Math.min.apply(null, results);
};

// exported so the runner below can require them from "./remove-polarity"
module.exports = {removePolarity, bestPolarity};

const fs = require("fs").promises;
const {removePolarity, bestPolarity} = require("./remove-polarity");

const main = async () => {
  try {
    const input = await fs.readFile(__dirname + "/input.txt", {
      encoding: "utf-8"
    });

    const part1 = removePolarity(input);
    console.log({part1});

    const part2 = bestPolarity(input);
    console.log({part2});
  } catch (e) {
    console.log(e.message);
    process.exit(-1);
  }
};

main();

Full code: github.com/MattMorgis/Advent-Of-Co...

Ben Sinclair

This is the first one of these I've tried and I'm a lazy person at best, so I did it in Vim:

let @q=':s/aA\|bB\|cC\|dD\|eE\|fF\|gG\|hH\|iI\|jJ\|kK\|lL\|mM\|nN\|oO\|pP\|qQ\|rR\|sS\|tT\|uU\|vV\|wW\|xX\|yY\|zZ\|Aa\|Bb\|Cc\|Dd\|Ee\|Ff\|Gg\|Hh\|Ii\|Jj\|Kk\|Ll\|Mm\|Nn\|Oo\|Pp\|Qq\|Rr\|Ss\|Tt\|Uu\|Vv\|Ww\|Xx\|Yy\|Zz//g
@qq'

There are only 52 combinations after all. Takes about 2 minutes (with nohlsearch...)
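
The 52 alternatives don't have to be typed by hand, by the way; a throwaway script can generate the pattern to paste into the macro (illustrative Python, not part of the Vim solution):

import string

pairs = [l + l.upper() for l in string.ascii_lowercase]
pairs += [l.upper() + l for l in string.ascii_lowercase]
print('\\|'.join(pairs))  # aA\|bB\| ... \|Zz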

Stage 2? Eh, my lunchbreak is really for lunch.

Patrick Tingen

Wow, thanks for the idea to use a stack. I revised my first solution in OpenEdge 4GL and it went down from 39 seconds (yes, seconds) to 240 msec. A-ma-zing!