Problems
A curated list of coding problems
F. We Were Both Children
Problem Summary:
We have N<=2e5 frogs with hopping distances up to 1e9. Each frog will go to spots hop, 2 * hop, 3 * hop, etc. Find a spot from 1 to N where we can see the most frogs.
Notes:
Get the frequency of every hop type and use harmonic series.
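A minimal sketch of that idea in Python (the function name and interface are mine, not the judge's I/O): count hop frequencies, then for each hop value walk its multiples up to N. The total work is the harmonic series N/1 + N/2 + ... = O(N log N).

```python
from collections import Counter

def best_spot(n, hops):
    # frogs with hop > n never land on a spot in 1..n
    freq = Counter(h for h in hops if h <= n)
    seen = [0] * (n + 1)  # seen[s] = how many frogs pass through spot s
    for h, c in freq.items():
        for s in range(h, n + 1, h):  # spots h, 2h, 3h, ...
            seen[s] += c
    best = max(range(1, n + 1), key=lambda s: seen[s])
    return best, seen[best]
```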
Solutions:
B. Sherlock and his girlfriend
Problem Summary:
We have N <= 1e5 pieces of jewelry. Color all of them with the minimum number of colors such that no pair (i, j) shares a color when i is a prime divisor of j.
Notes:
Any non prime can be one color and any prime can be the other.
Solutions:
204. Count Primes
Problem Summary:
Count the # of primes < N <= 5e6
Notes:
Refer to CSES: Counting Divisors for full notes
Solutions:
Counting Divisors
Problem Summary:
Given a number N <= 1e5 we will get N queries of numbers <= 1e6, report how many factors those numbers have.
Notes:
Naive solution: for a given factor we add contributions, so 1 contributes to 1, 2, 3, ..., X; 2 contributes to 2, 4, 6, ...; then we answer each query in O(1). It takes X log X time to build the counts and N time to answer queries, so N + X log X total.

Another solution: if we know the smallest prime factor "spf" of every number, we can divide a number by its spf repeatedly and use that info to compute the # of divisors. If a number is 2^3 * 3^1 * 5^4 it has 4 * 2 * 5 choices of divisors. So we divide by the spf and track the exponent; when the spf changes we multiply these together. On average a number M has log log M prime factors (Ramanujan), so each query takes about log log X time.

To build the spf array we can use a prime sieve but mark which number is the spf. This is done by iterating on a number and then its multiples, marking composites. We only need to mark composites when the number itself is prime. For instance 4 is not prime, so we don't iterate on 8, 12, 16, etc., since we know those are already marked. Naively the sieve is X log X by the harmonic series, but restricted to primes it is X log log X: the inner loop runs X/p times for each prime p, and the sum of 1/p over primes up to X grows like log log X (Mertens). We mark the spf of a number the first time we mark it:

    for divisor in range(2, MAX_X + 1):
        if spf[divisor] == 0:  # divisor is prime (never had an spf set)
            for mult in range(2 * divisor, MAX_X + 1, divisor):
                if spf[mult] == 0:
                    spf[mult] = divisor

However we can make some optimizations; see the "sieves.py" guide for notes, but a few callouts:
- mult can start at divisor**2, not divisor*2, because any smaller multiple has a factor below divisor and was already marked. In C++ this requires some long long casting, like long long mult = 1LL * divisor * divisor. Doesn't change the complexity, or I think the runtime, very much.
- We can loop divisor only up to root(MAX_X), not MAX_X. Remember the goal of the outer loop is to mark larger composite numbers as not prime. But if our factor is > root(MAX_X), then any multiple of it in range has another factor less than root(MAX_X), which would already have marked it.

The only downside is the loop won't mark spf[somePrime] = somePrime; for instance with MAX_X = 100 we never mark spf[13] = 13, it stays 0. But implicitly anything with spf = 0 is prime. Also, if we are just building a bool array isPrime = [True] * n we aren't worried at all, because those isPrime entries are already True, so stopping the loop at root(MAX_X) is totally fine.
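A runnable sketch of the spf sieve plus divisor counting (the names build_spf and count_divisors are mine; this combines the divisor**2 start and the root(MAX_X) cutoff described in the notes):

```python
def build_spf(limit):
    # spf[x] = smallest prime factor of x; 0 means "never marked", i.e. x is prime (or x < 2)
    spf = [0] * (limit + 1)
    for divisor in range(2, int(limit ** 0.5) + 1):
        if spf[divisor] == 0:  # divisor is prime
            for mult in range(divisor * divisor, limit + 1, divisor):
                if spf[mult] == 0:
                    spf[mult] = divisor
    return spf

def count_divisors(x, spf):
    # peel off the smallest prime factor repeatedly; a number p1^e1 * p2^e2 * ...
    # has (e1 + 1) * (e2 + 1) * ... divisors
    total = 1
    while x > 1:
        p = spf[x] or x  # spf == 0 means x itself is prime
        exp = 0
        while x % p == 0:
            x //= p
            exp += 1
        total *= exp + 1
    return total
```

Per the notes, a prime above root(limit) keeps spf = 0, which count_divisors treats as "x is its own smallest prime factor".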
Drill Type:
💻 Implement: Implement different versions of this:
- Harmonic series
- SPF: repeatedly factorize numbers + use combinatorics
- DP on divCount and spfPower
Re-read notes and understand the different sieves and complexities.
Solutions:
666. Path Sum IV
Problem Summary:
Given a tree with a weird structure / format, return the sum of all root<>leaf paths.
Notes:
Use the root=1, left child = 2*i, right child = 2*i + 1 system to handle the weird format. I did dfs(node, prefixSum) as we descend and used a global result to update the answer when I hit a leaf. We can avoid the global by making the DFS return a value. GPT: "Yep. You can make DFS return 'sum of all root-to-leaf path sums in this subtree, given that we start at this node'." Like this:

    def dfs(i, prefixSum):
        if i not in idxToNode:
            return 0
        val = idxToNode[i] % 10
        prefixSum += val
        left, right = 2*i, 2*i+1
        if left not in idxToNode and right not in idxToNode:
            return prefixSum
        return dfs(left, prefixSum) + dfs(right, prefixSum)

    return dfs(1, 0)

So I guess the DFS returns the "this node to leaf" path sums. We can do it without the prefixSum parameter too. GPT: If you really want DFS to not carry a prefix, you can return two things from each node:
- leafCount: how many leaves are in this subtree
- sumFromHere: sum of path-sums from this node down to leaves, including this node's value

Then the parent can combine without needing a prefix:

    def dfs(i):
        val = idxToNode[i] % 10
        left, right = 2*i, 2*i+1
        hasLeft = left in idxToNode
        hasRight = right in idxToNode
        if not hasLeft and not hasRight:
            return (1, val)  # 1 leaf, sum of paths (just val)
        leafCnt = 0
        childSum = 0
        if hasLeft:
            c, s = dfs(left)
            leafCnt += c
            childSum += s
        if hasRight:
            c, s = dfs(right)
            leafCnt += c
            childSum += s
        # every path below gets +val once
        return (leafCnt, childSum + val * leafCnt)

    leafCnt, total = dfs(1)
    return total
Drill Type:
🧠 Mindsolve: Re-read the concept for DFS returning a value instead of using a global and understand the implementation. Re-read the second solution without the prefix-sum parameter too.
Solutions:
1303. Find the Team Size
Problem Summary:
Given a SQL table of employee_id and their team_id, output employee_id and the size of the team they are on
Notes:
Good practice with JOIN
Solutions:
1173. Immediate Food Delivery I
Problem Summary:
Given a SQL table of food orders find the % of orders where the order date is the same as the delivery date
Notes:
Good practice all around using CASE, ROUND, understanding COUNT(*)
Solutions:
1082. Sales Analysis I
Problem Summary:
We have a SQL table of sales, select every seller_id that had a max total price of sales.
Notes:
I tried this at first:

    -- Write your PostgreSQL query statement below
    SELECT seller_id, SUM(price) AS price_sum
    FROM Sales
    GROUP BY seller_id;

The issue is this shows every seller<>total sales, but I only want to show the top amount(s). GPT showed this:

    SELECT seller_id
    FROM Sales
    GROUP BY seller_id
    HAVING SUM(price) >= ALL (
        SELECT SUM(price)
        FROM Sales
        GROUP BY seller_id
    );

"ALL lets you compare a value against every value returned by a subquery." I asked: "how can we do HAVING SUM(price) when we didn't select price in that SELECT though, you only selected the seller_id?" GPT said: Because HAVING is allowed to use aggregate expressions that are computed from the rows in each group, even if you don't output them. In a grouped query, Postgres conceptually does:
1. FROM: get rows
2. GROUP BY seller_id: make buckets
3. compute aggregates per bucket (SUM(price), COUNT(*), etc.)
4. HAVING ...: keep/discard buckets using those aggregates
5. SELECT ...: choose what columns to output for each remaining bucket

So this is totally valid:

    SELECT seller_id
    FROM Sales
    GROUP BY seller_id
    HAVING SUM(price) >= 1000;

You're filtering groups by their sum, but only displaying the seller_id. GPT also gave another solution:

    WITH s AS (
        SELECT seller_id, SUM(price) AS total
        FROM Sales
        GROUP BY seller_id
    )
    SELECT seller_id
    FROM s
    WHERE total = (SELECT MAX(total) FROM s);
Drill Type:
💻 Implement: Read through notes and re-solve.
Solutions:
1050. Actors and Directors Who Cooperated At Least Three Times
Problem Summary:
We have a SQL database ActorDirector and need to find all actor_id director_id pairs that have occurred >= 3 times
Notes:
We could use a subquery to first get the counts and an outer query to get the rows with count >= 3 (commented out code in github) or we can use HAVING. Remember WHERE filters before the group by and HAVING filters after.
Solutions:
1069. Product Sales Analysis II
Problem Summary:
Given a SQL sales table with product_id, quantity sold, return a table of each product id with how much it sold
Notes:
Standard SELECT + sum aggregator on a group by product
Solutions:
3816. Lexicographically Smallest String After Deleting Duplicate Characters
Problem Summary:
We have a string of letters. We can keep picking 2 of the same letter and delete one. Find the lexicographical minimum string.
Notes:
Hard hard hard. I am stronger tomorrow.
Drill Type:
💻 Implement: Solve it yourself.
Solutions:
3815. Design Auction System
Problem Summary:
Design an auction system, support users bidding on items for different amounts, updating bids, getting the highest bid and bidder for an item, and removing bids.
Notes:
Can manage this with sorted containers or a lazy heap
Solutions:
3814. Maximum Capacity Within Budget
Problem Summary:
We have machines with costs and capacities, and a budget. What is the maximum capacity we can get while staying within the budget, using up to 2 machines?
Notes:
Many ways to solve:
- Prefix maximum + binary search: pick one item, binary search for the most expensive affordable second item in the prefix, and take a prefix max over capacities
- I think we can adapt the above to use two pointers, starting at the cheapest and most expensive items and tracking prefixes
- Segment tree abuse to find the max in a range, like the binary search solution
- Some mono-stack solution I did not read: https://leetcode.com/problems/maximum-capacity-within-budget/solutions/7503763/simple-python-monotonic-stack-by-townizm-38ve/ (the ones I read didn't list seg tree)
Solutions:
3813. Vowel-Consonant Score
Problem Summary:
Just a basic string counting question
Solutions:
C. Serval and The Formula
Problem Summary:
We are given X, Y <= 1e9. We need to find a K <= 1e18 where (x + k) + (y + k) = (x + k) ^ (y + k) or show it is impossible.
Notes:
This is saying A + B = A ^ B. This only happens when A & B = 0, no bits can be shared. If A=B we cannot do anything as they will always be the exact same. Otherwise, set the bigger one to some higher power of 2 which will share no bits with numbers below it.
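A sketch of that construction (find_k is my name; assumes 1 <= x, y <= 1e9 as in the constraints): push the larger value up to an exact power of two, so the two values cannot share a set bit.

```python
def find_k(x, y):
    # a + b == a ^ b exactly when a & b == 0 (addition with no carries)
    if x == y:
        return -1  # x + k and y + k stay identical, and a positive number always shares bits with itself
    big = 1 << 30  # strictly above 1e9, so strictly above both x and y
    k = big - max(x, y)  # the larger value becomes exactly 2^30, a single set bit
    # the smaller becomes 2^30 - (max - min) < 2^30, so it cannot contain bit 30
    return k
```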
Drill Type:
🧠 Mindsolve: Remember the clean solution and your struggle with trying to greedily iterate from the MSB or LSB. Remember the simple fact: A + B = A ^ B when A and B share no set bits.
Solutions:
B. Serval and Final MEX
Problem Summary:
We have an array of non-negative numbers of length >= 4. We can select a subarray and replace it with its MEX. Construct a sequence of operations where we get down to an array [0].
Notes:
To get [0] we must first reach an array with no 0s. I think the easiest way: if the left half of the array has 0s, operate on the left half to collapse it to a non-zero. Repeat on the right half. Now MEX the whole thing. We could also arbitrarily MEX the array down to n=4 and brute force from there. My solution was a bit more complex: if a boundary was 0 I MEX'd it with its neighbor to get a non-zero boundary, for both boundaries. Then if a 0 remained in the middle I MEX'd that. Then MEX'd the whole thing.
Solutions:
A. Serval and String Theory
Problem Summary:
We have a string s. We need it to be lexicographically smaller than the reverse of s. In an operation we can swap two letters. Can we achieve this in <= k operations?
Notes:
If s < reverse(s) we win immediately with 0 operations. Otherwise, if k = 0 we lose, and if s has only 1 unique letter we lose (s always equals its own reverse). Now we have at least one operation and two differing letters. If the two differing letters are at the start and end we can swap them. Otherwise we can swap a differing letter with one of the ends.
Solutions:
Digit Queries
Problem Summary:
Consider the infinite string of numbers 123456789101112... Answer q <= 1000 queries: what is the k-th digit, for k <= 1e18?
Notes:
Find how many full numbers of each width we pass, basically implementation practice
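The implementation, as a sketch (kth_digit is my name): skip whole blocks of equal-width numbers, then index into the remaining block.

```python
def kth_digit(k):
    # k is 1-indexed into "123456789101112..."
    width = 1   # current block: numbers with `width` digits
    count = 9   # how many numbers have this width
    first = 1   # first number of this width
    while k > width * count:  # skip the whole block of width-digit numbers
        k -= width * count
        width += 1
        count *= 10
        first *= 10
    # k now indexes into the width-digit block
    number = first + (k - 1) // width
    return int(str(number)[(k - 1) % width])
```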
Drill Type:
💻 Implement: Practice implementation, just free-style solve it.
Solutions:
1890. The Latest Login in 2020
Problem Summary:
We have a SQL table with user_id and time_stamp Login times. Return a table for the last login in 2020 for each user.
Notes:
- First we need to select user_id from the table, but also MAX(time_stamp)
- We will GROUP BY user_id
- We will WHERE the time_stamp into the 2020 date range
Solutions:
586. Customer Placing the Largest Number of Orders
Problem Summary:
We have a SQL table with columns order_number and customer_number. Find the customer_number who placed the most orders.
Notes:
- We can do a query where we GROUP BY customer_number but throw in COUNT(*) AS Cnt to get a column with the count of rows in each group
- Then do an outer query to grab the highest count: sort descending and LIMIT 1
Solutions:
610. Triangle Judgement
Problem Summary:
We have a SQL table Triangle with columns x, y, z that are numbers. Report all rows that can form valid triangles + an extra column Triangle with "Yes" or "No"
Notes:
- Triangle inequality: (x + y > z) and (x + z > y) and (y + z > x)
- To add a column we use CASE / WHEN / THEN / ELSE / END AS
Drill Type:
🧠 Mindsolve: Remember the triangle inequality.
Solutions:
3811. Number of Alternating XOR Partitions
Problem Summary:
Given an array of numbers N <= 1e5 A[i] <= 1e9, and target1 and target2, find how many partitions of the array exist where each block XOR is target1, then target2, then target1, repeating.
Notes:
This is so hard. We will track endA and endB default dicts. endA[x] tells us the # of ways where our last block was target1, and the XOR of the entire prefix is x, as we enumerate on the array. Initially endB[0] = 1. As we add a new element we have some new prefix XOR. We wish a suffix partition to be equal to target1 or target2, meaning we need to look at some prior prefix. If we want a suffix block to be target2 we need to look at the number of ways we previously had an ending block of target1. But not just any previous ending, we use the prefix XOR to establish which ones we can look at. The ending of the solution is tricky too because we return addOne and addTwo specifically for only partitions that ended at n-1.
Drill Type:
💻 Implement: Re-solve this hard problem.
Solutions:
3812. Minimum Edge Toggles on a Tree
Problem Summary:
We have an undirected tree where every node is 0 or 1, and a desired target list of 0s and 1s for each node. In an operation we can toggle both ends of an edge. Return a minimum # of edge flips or show it is impossible.
Notes:
I thought of a toposort, basically a leaf node must toggle if it needs to be switched. We could do a postorder DFS but toposort made sense to me in the moment (basically the same thing). This was a unique application of a topo sort because it was on an undirected tree so the implementation was a bit different. And a good lesson was to get the topo order first then process it after.
Drill Type:
💻 Implement: Read my notes and also re-implement my weird topo sort.
Solutions:
3810. Minimum Operations to Reach Target Array
Problem Summary:
We have two arrays of numbers, nums and target (same length). In one operation we can pick all maximal contiguous subsegments of the same type from nums (so if we pick number 5, pick all largest subarrays of only 5s) and make them match with the correct target numbers. Minimum # of operations needed to make nums equal to target?
Notes:
Every number type can be solved independently in one operation. So add all number types that mismatch to a set and return the length of that set.
Solutions:
3809. Best Reachable Tower
Problem Summary:
Towers exist on a 2d grid with [x, y] coordinates and a quality of q. Find the highest quality tower within a radius from our starting position, and in ties take the lexicographical smaller pair.
Notes:
Just linearly enumerate the towers and check
Solutions:
D. Reachability and Tree
Problem Summary:
We have a tree of <= 2e5 nodes. We need to make all edges directed. An ordered pair of nodes (u, v) is good if we can reach v from u. Is it possible to make the edges directed and have exactly N good pairs? Construct a solution if so.
Notes:
Note that by default all N-1 edges each add 1 good pair, so we just need one more. Structures like A>B>C>D add too many good pairs, since reachability composes along the chain. To prevent ever having too many good pairs, our tree must alternate directions, so that every node has either all edges pointing in or all edges pointing out. However we need exactly one more good pair than normal, so somewhere we allow a single directed chain of length 2 like A>B>C, provided B cannot reach any other nodes. It is okay if C has other children and A connects to other things too. So we just need to find a node with 2 edges to serve as the middle of the chain, take one of its neighbors as the root A, and construct the rest alternating.
Drill Type:
🧠 Mindsolve: Remember the idea.
Solutions:
C. Coloring Game
Problem Summary:
We have N <= 5000 balls with values <= 1e5. We will pick 3 and color them red. The opponent then picks one ball (possibly a red one) and repaints it blue. For how many triplets do we end with a strictly higher score?
Notes:
Try picking 2 balls A and B. Now the third ball C cannot exceed A+B or Bob could recolor C and win. So we have an upper bound. But it must be high enough such that A + B + C > max. So binary search for a range. I had to handle duplicates a bit separately, so I binary searched for a range that would make the C ball always bigger than B (so A B are distinct from C). Then I handled A=B=C separately and A, B=C separately. Two pointers could improve it from n^2 log n to n^2.
Solutions:
B. Shrinking Array
Problem Summary:
We have an array of numbers and in an operation we can replace two adjacent elements with some number between them, inclusive. What is the minimum # of operations to have an adjacent pair with a diff <= 1?
Notes:
If we already have such a pair, return 0. If the array is strictly monotonic, like 1 3 5 9, we can never form one. Otherwise there must be a peak like 1 5 2 or a valley like 9 3 6, and a single operation suffices.
Solutions:
A. Race
Problem Summary:
We have bob at some point on a number line and two prizes. We can choose where to start at. Can we guaranteed get either prize before bob?
Notes:
Just check if bob is on one side of both points
Solutions:
E. Unpleasant Strings
Problem Summary:
We have a string S of size <= 1e6 and up to Q <= 2e5 other strings (with total length at most 1e6). For each string answer how many more letters we need to append so that it is not a subsequence of S.
Notes:
To quickly check if a string t is a subsequence of s we use nxt[i][char] to only spend O(t) time. Now if we are a subsequence we know where we ended and then we use worst[i][c] to find the furthest right character to jump to. My implementation for worst[i][char] was easier if it corresponds to a suffix, not including `i`. Same with nxt[i][char]. Then I compute an O(26) size first[char] array to figure out where to start. But we also need a dp[i] to stack on top of the worst chains so that a query is O(1) after the O(t) portion.
Drill Type:
🧠 Mindsolve: Refresh mindsolve on this.
Solutions:
1037. Valid Boomerang
Problem Summary:
Check if 3 points are distinct and not on the same line
Notes:
We can compare slopes, and to avoid fractions we keep each slope as a reduced pair dx/dy: divide both by their GCD (and normalize the sign) so equal slopes compare equal.
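One way to code the GCD-simplified slope comparison (helper names are mine): reduce each direction vector and canonicalize its sign so collinear directions collide.

```python
from math import gcd

def normalized_slope(dx, dy):
    # math.gcd handles negatives and returns a non-negative value;
    # (dx, dy) is never (0, 0) here because the points are distinct
    g = gcd(dx, dy)
    dx, dy = dx // g, dy // g
    if dx < 0 or (dx == 0 and dy < 0):  # fix sign so (1, 2) and (-1, -2) match
        dx, dy = -dx, -dy
    return dx, dy

def is_boomerang(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    if len({tuple(p) for p in points}) < 3:  # points must be distinct
        return False
    return normalized_slope(x2 - x1, y2 - y1) != normalized_slope(x3 - x1, y3 - y1)
```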
Drill Type:
🧠 Mindsolve: Mind-solve the slope hashing.
Solutions:
1041. Robot Bounded In Circle
Problem Summary:
We have a robot at (0,0) facing north. It follows an instruction string of moves (go forward) and turns (left / right), looping it forever. Is the robot confined to a bounded region?
Notes:
Run the instructions once. If the robot ends back at (0,0) it is bounded. If it ends elsewhere but is no longer facing its starting direction, it is also bounded: the net displacement gets rotated each cycle and cancels out within at most 4 cycles. Only if it ends off-origin still facing the same direction does it drift forever.
Solutions:
64. Minimum Path Sum
Problem Summary:
We are in a grid from the top left and going to the bottom right, down or right only. Minimize path sum.
Notes:
Standard DP
Solutions:
228. Summary Ranges
Problem Summary:
Summarize ranges Input: nums = [0,2,3,4,6,8,9] Output: ["0","2->4","6","8->9"]
Notes:
Just iterate and solve
Solutions:
20. Valid Parentheses
Problem Summary:
See if a bracket sequence with () [] {} is valid
Notes:
Just use a stack
Solutions:
3653. XOR After Range Multiplication Queries I
Problem Summary:
Support L...R queries where we multiply elements by V, spaced out by K. But small constraint.
Notes:
Just brute force it due to small constraint
Solutions:
793. Preimage Size of Factorial Zeroes Function
Problem Summary:
Call f(num) the number of trailing zeroes on num!. How many numbers X, have f(X) = a given K? K <= 1e9.
Notes:
Binary search on answer, use legendre's
Drill Type:
🧠 Mindsolve: Just remember how to solve this.
Solutions:
197. Rising Temperature
Problem Summary:
We have a database with columns ID, date, temperature. Find all rows with a temperature bigger than the previous day.
Notes:
Join the table with itself shifted by 1, select when the temperature is bigger.
Drill Type:
💻 Implement: Practice a basic join.
Solutions:
196. Delete Duplicate Emails
Problem Summary:
We have a table with column id, and email address. Keep only the smallest id per email address (use a DELETE).
Notes:
Mental model for deletes: DELETE + USING (PostgreSQL).

DELETE removes rows from ONE target table:

    DELETE FROM Person;

Meaning: Person is the only table rows can be deleted from; SQL scans Person and deletes rows (all rows, since no condition).

DELETE ... WHERE ... = delete rows that match a condition:

    DELETE FROM Person WHERE email = 'john@example.com';

Meaning: iterate over each row in Person; if the WHERE condition is true for that row, delete it. Only Person rows are deleted.

DELETE ... USING ... = delete from the target table, but allow references to other tables. General form:

    DELETE FROM target_table
    USING other_tables
    WHERE condition involving target_table and other_tables;

Key ideas: rows are deleted ONLY from target_table; USING tables are READ-ONLY helpers; USING allows join-like comparisons while deciding which target rows to delete. Think: "For each row in target_table, check if there exists a matching row in the USING tables that satisfies the WHERE condition."
Drill Type:
💻 Implement: Practice DELETE and review the mental model.
Solutions:
619. Biggest Single Number
Problem Summary:
We have a table MyNumbers with a column nums. Find the largest number appearing once or return null.
Notes:
First we can select on MyNumbers with GROUP BY to collapse duplicate numbers, then filter the groups with HAVING COUNT(num) = 1. This returns a table of all numbers with frequency 1; we subquery it to get the max (which is NULL if no number appears once).
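The same two-step query, demoed through Python's sqlite3 so it is runnable here (LeetCode runs this against PostgreSQL, but the grouping logic is identical; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyNumbers (num INTEGER)")
conn.executemany(
    "INSERT INTO MyNumbers VALUES (?)",
    [(8,), (8,), (3,), (3,), (1,), (4,), (5,), (6,)],
)

# inner query keeps numbers with frequency 1, outer query takes the max (NULL if none)
row = conn.execute("""
    SELECT MAX(num) AS num
    FROM (SELECT num FROM MyNumbers GROUP BY num HAVING COUNT(num) = 1)
""").fetchone()
print(row[0])  # 6: the largest of the single numbers {1, 4, 5, 6}
```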
Drill Type:
💻 Implement: Review notes for how GROUP BY relates to SELECT and COUNT: rows become groups, and aggregates like MIN are defined on groups. Think of it as two phases.

Phase A: form groups (buckets). GROUP BY email partitions the table into buckets: bucket "john@..." contains rows with ids {1,3}, bucket "bob@..." contains rows with ids {2}, and so on. At this point you do not yet have output rows.

Phase B: produce one output row per bucket. Now SQL evaluates the SELECT list once per bucket: email = the bucket label (same for the whole bucket); MIN(id) = take all id values in the bucket and compute the minimum. So for john's bucket: MIN({1,3}) = 1. That's why it's "safe": you've told SQL how to collapse many ids into one value per group.

Why "conflict" isn't quite right: it's not that SQL tries, conflicts, and then fixes it. It's that after grouping, you're no longer allowed to ask for raw row columns (id) unless they're part of the group key, or computed from the group via an aggregate (MIN, MAX, COUNT, ...).
Solutions:
1228. Missing Number In Arithmetic Progression
Problem Summary:
We have an arithmetic sequence [A, A + x, A + 2x, ...]. A number (not first or last) was removed, find it
Notes:
I kind of misread: I didn't know the array was sorted, so I did a counting-sort solution. Otherwise we can linearly iterate, or even binary search on an index and compare against the value that should be there.
Solutions:
Trailing Zeros
Problem Summary:
Find the # of trailing zeroes in N!. N <= 1e9.
Notes:
A trailing 0 is formed by a factor of 2 * 5. There are fewer factors of 5 than factors of 2, so count the factors of 5: every number divisible by 5 adds 1 to the result, then repeat for 25s, 125s, and so on. log(n) time.
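Legendre's formula restricted to the prime 5, as a sketch (the function name is mine):

```python
def trailing_zeros(n):
    # count factors of 5 in n!: multiples of 5 contribute one each,
    # multiples of 25 one more, multiples of 125 one more, etc.
    result = 0
    power = 5
    while power <= n:
        result += n // power
        power *= 5
    return result
```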
Drill Type:
🧠 Mindsolve: Remember Legendre's formula.
Solutions:
Two Knights
Problem Summary:
Find the # of ways to put 2 knights on an NxN chessboard for all N up to 10,000.
Notes:
# of total ways to place 2 knights is n^2 choose 2. Then subtract attacking pairs; knights attack each other exactly across 2x3 rectangles:

    X X X
    X X X

We lose 2 placements for each such rectangle (and each 3x2 one), and the number of rectangles can be computed in O(1).
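In code (two_knights is my name): there are (n-1)*(n-2) placements of a 2x3 rectangle, the same again for 3x2, and each rectangle costs us 2 attacking placements.

```python
def two_knights(n):
    total = (n * n) * (n * n - 1) // 2  # n^2 choose 2 placements overall
    rectangles = 2 * (n - 1) * (n - 2)  # 2x3 placements plus 3x2 placements (0 when n <= 2)
    return total - 2 * rectangles       # each rectangle holds exactly 2 attacking pairs
```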
Solutions:
3701. Compute Alternating Sum
Problem Summary:
Sum even indices minus odd indices
Notes:
1 liner
Solutions:
3336. Find the Number of Subsequences With Equal GCD
Problem Summary:
We have N <= 200, A[i] <= 200. Find the # of ways to select two disjoint non-empty subsequences (they don't have to cover all numbers) with equal GCDs.
Notes:
dp(i, gcd1, gcd2) and we pick or not pick. Can precompute all GCDs.
Solutions:
3247. Number of Subsequences with Odd Sum
Problem Summary:
Find the # of subsequences with an odd sum
Notes:
Basic dp(i, parity) or bottom up
Solutions:
C. Divine Tree
Problem Summary:
We have N <= 1e5 nodes labeled 1...N and are given M <= 1e12. The divine-ness of a node is the minimum label on the path from that node to the root of a tree, and the tree's divine-ness is the sum over all nodes. Can we form a tree with total divine-ness M? If so return the edges.
Notes:
First note a tree rooted at 1 has the minimum divine-ness, which is N, so below that is impossible. A max divine tree is a descending chain like 7->6->5->4..., giving n * (n + 1) / 2, so we cannot make more than that. Start with a min arrangement: 1>2>3>4>5. To increase by 1 we can rotate: 2>1>3>4>5. Again: 2>3>1>4>5. Eventually 1 gets put at the back: 2>3>4>5>1. We can do it again with 2, eventually reaching 3>4>5>2>1. I simulated this with a deque. We can also think of it as a baseline where we score 1 per node, leaving P = m - n remaining to score. Any time we put a number X before 1, we gain X-1 extra (if we put them in descending order, like 6 5 4 1, we gain 5 + 4 + 3). So we can take the biggest numbers that sum to our remaining target and put everything else after the 1.
Drill Type:
💻 Implement: Implement wice's solution.
Solutions:
B. Square Pool
Problem Summary:
We have a huge S x S coordinate pool table and balls at integer positions that are hit at 45 degree angles. They bounce. How many balls eventually go into a hole?
Notes:
It is the reflection vs. bouncing trick: count how many balls sit on each diagonal; those balls will go in.
Solutions:
A. Square of Rectangles
Problem Summary:
We get 3 rectangles where the sides of r1 are >= sides of r2 are >= sides of r3, can we place them adjacent to form a square?
Notes:
I did a ton of if/else branching, and a small rotation trick at the end
Solutions:
D. Missing Subsequence Sum
Problem Summary:
We are given K <= N <= 1e6. Construct a sequence of at most length 25, where no subsequence adds to K, but we can create subsequence sums for all other 1...N.
Notes:
A standard summing problem would give 1 2 4 8 ... until our total sum is >= N. First construct up to K-1: 1 2 4 8 16 ... then some number that puts us at K-1. Now we can make 1 to K-1. We need K+1, the only number that can be placed is K+1. Now we can make 1 to K-1 and K+1 to 2K. We need 2K+1 so we place that. We might think we can place all numbers up to 4K+1 now, but we cannot make 3K+1 as it is a hole. We place 3K+1 and we might think we can make up to 7K + 2 but we cannot make 6K + 2. I kept doubling the hole and placing that and it worked but also needed some edge case handling. Wasn't the cleanest solution.
Drill Type:
🧠 Mindsolve: Stay vaguely aware of how I solved this.
Solutions:
C. Everything Nim
Problem Summary:
We have N piles of stones, A[i] <= 1e9, N <= 2e5. We can remove a number of stones from every pile at once, as long as all piles have that many stones. Last to remove wins. Who wins?
Notes:
Observe repeated piles do nothing: [1, 3, 3, 5] is equivalent to [1, 3, 5]. If the current smallest pile has at least 2 stones remaining, we can either deplete it fully or leave 1 and force our opponent to deplete it, so we control the parity and win. If it has exactly 1 remaining, our move is forced and the turn passes. So iterate over the distinct piles, accounting for previous subtractions, until we reach a pile with at least 2 remaining (or the last pile); the player to move there wins.
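A sketch of that iteration, cross-checked against a tiny brute-force game solver (both functions and their interfaces are mine; first_wins returns True when the player who moves first wins):

```python
from functools import lru_cache

def first_wins(piles):
    # walk distinct pile sizes; a layer ("gap") of >= 2 stones, or reaching the
    # largest pile, lets the player to move decide the game; a gap of 1 is forced
    a = sorted(set(piles))
    my_turn = True
    removed = 0
    for i, x in enumerate(a):
        gap = x - removed
        if i == len(a) - 1 or gap >= 2:
            return my_turn
        removed = x
        my_turn = not my_turn

@lru_cache(maxsize=None)
def brute(piles):
    # piles: sorted tuple of nonzero pile sizes; the player with no move loses
    if not piles:
        return False
    return any(
        not brute(tuple(sorted(x - k for x in piles if x - k > 0)))
        for k in range(1, piles[0] + 1)  # can remove at most min(piles) from every pile
    )
```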
Solutions:
B. Rectangle Filling
Problem Summary:
We have a 500x500 grid of black and white tiles. We can select two diagonal corners if they are the same color and flip the entire region to that color. Can we white or blackout the grid?
Notes:
If two diagonal corners are the same color, we can do it immediately. Otherwise we have something like:

    W W
    B B

(or the same rotated 90 degrees). To white out those bottom black corners we would need a white cell somewhere between them. So just check the cells between the corners along each of the four edges.
Solutions:
A. Card Exchange
Problem Summary:
We have N cards with numbers on them. We can exchange K of the same type for K-1 of any number. Min # of cards we can form?
Notes:
If we have K of a card, we can group it with any other number forming a cascade effect. Otherwise we can do nothing.
Solutions:
E2. Erase and Extend (Hard Version)
Problem Summary:
We have a string s of size <= 5e5. We need to form a string of size k <= 5e5. We can delete the last letter of s, or do s = s + s. After infinite operations, find the smallest lexicographical string we can form.
Notes:
First read the E1 solution. Then here is the greedy idea. There may also be other solutions, like the z-function, but I'm just listing this greedy one / putting it under string algorithms; I don't really consider this constructive.

We enumerate prefixes greedily:
- Say we have some prefix "db" and are considering whether "dbc" repeated is better. That c compares against the start of our prefix, d, so it is better.
- If we were at "db" and considering "f", clearly "db" + "d" is better than "dbf", and we fail.
- If the letter is equal, like "dbca" considering "d" (in the test "dbcadabc"), we don't know which sequence is better yet. We store "dbca" as the best still, but accept the new "d" as an option and keep going.
- If we keep going a long time, say from an initial prefix |AB| everything in CDEFGHIJK keeps being marked equal: we don't need to mod our indexToConsider back to the beginning, I think because it is guaranteed that the CDEFGH... portion is just ABABAB repeated.
Drill Type:
💻 Implement: Solve this.
Solutions:
D. Deleting Divisors
Problem Summary:
We have a number N <= 1e9. On our turn we can subtract a divisor not 1 or N. If we cannot, we lose. If two players are playing optimally who wins?
Notes:
Some observations: if we are prime, we lose. If we can force the opponent onto a prime, we win. If we subtract a divisor X, the new number is still divisible by X. So the only way to hand someone a prime p is to halve 2p into p: after subtracting a prime divisor p we can only reach multiples of p, and the only prime multiple of p is p itself. We cannot produce some other prime (e.g. from 25, subtracting the divisor 5 can never produce a prime that isn't 5, and here it doesn't even make 5, it makes 20), because the new number keeps the factor we subtracted. I made some of these observations but could not close the logic loop, so I printed answers up to 1000 and noticed a pattern.

Proper solution by casework:
- If n is odd, all divisors are odd, so we must subtract an odd. The result is even, but it cannot be a power of 2, since a power of 2 has no odd divisors above 1.
- If n is even and not a power of 2, it has a valid odd divisor. Subtracting it gives our opponent an odd number, which is either a prime (they lose) or an odd number from which they must hand us back an even non-power-of-2. So evens that are not powers of 2 win.
- If n is a power of 2, subtracting anything other than n/2 gives an even non-power-of-2, which we proved wins for the other player. So halving is the only useful move, and the powers of 2 alternate: 2 loses (it's prime), 4 wins, 8 loses, 16 wins, ...

So we win on all evens except 2, 8, 32, etc.
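The pattern I printed, checked against brute force (function names are mine; wins(n) says whether the player to move wins from n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    # brute force: the mover wins if some divisor d (1 < d < n) leads to a loss for the opponent
    return any(n % d == 0 and not wins(n - d) for d in range(2, n))

def wins_fast(n):
    # claimed pattern: win on every even n except powers of 2 with an odd exponent (2, 8, 32, ...)
    if n % 2 == 1:
        return False
    if n & (n - 1):  # even but not a power of two
        return True
    # n == 2^k: bit_length is k + 1, so odd bit_length means even exponent (4, 16, 64, ... win)
    return n.bit_length() % 2 == 1
```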
Solutions:
E1. Erase and Extend (Easy Version)
Problem Summary:
We have a string s of size <= 5000. We need to form a string of size k <= 5000. We can delete the last letter of s, or do s = s + s. After infinite operations, find the smallest lexicographical string we can form.
Notes:
My claim is that we should delete some number of letters, duplicate until we exceed size k, then trim back down to k. Removing some letters, duplicating, removing more, and duplicating again should be provably worse, though proving it needs some focus. Jiangly pointed out a good equivalence: s1 repeated infinitely < s2 repeated infinitely exactly when s1 + s2 < s2 + s1: https://codeforces.com/blog/entry/91381?#comment-805703

Consider ABCD (don't worry about the actual letters for now). If we cut off D then duplicated: ABCABC. If we then claimed we wanted to cut off more and duplicate again: ABCAB ABCAB. That is sort of like claiming AB is better than ABC, in which case we should have just cut more to begin with. So with Jiangly's observation we can simply test every prefix length, tile it out, and see which is smallest. Since constraints are small we can afford n*k.
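With Jiangly's observation the E1 version is just this n*k brute force over prefixes (the function name is mine):

```python
def smallest_string(s, k):
    # try every prefix, tile it to length k, keep the lexicographically smallest
    best = None
    for p in range(1, len(s) + 1):
        prefix = s[:p]
        candidate = (prefix * (k // p + 1))[:k]  # repeat, then trim to exactly k
        if best is None or candidate < best:
            best = candidate
    return best
```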
Drill Type:
🧠 Mindsolve: Remember the string equivalence
Solutions:
C. Challenging Cliffs
Problem Summary:
We have an array with A[i] <= 1e9, N <= 2e5, and each adjacent pair that is non-decreasing adds 1 difficulty, e.g. [1, 1] or [2, 5]. We need to arrange the array such that abs(A[0] - A[-1]) is minimized; what is the max difficulty we can make?
Notes:
Great question; visualize the array as a monotonically increasing line chart. We find two adjacent heights (in sorted order) with the minimum difference, like 3->4, and make the right portion the start, so 4..., and the left portion the end, so ...3. This makes only one point in the chart drop, so we only lose 1 difficulty. If the length of the array is 2, though, we just return the sorted pair.
Solutions:
B. Bad Boy
Problem Summary:
We are in a 1e9 x 1e9 grid. We can pick two cells to move to orthogonally and then back to our start. Which two to maximize distance?
Notes:
I enumerated all pairs of corners to visit but I think we can make an observation that any opposite pair of corners is optimal.
Solutions:
A. Arithmetic Array
Problem Summary:
Given an array of integers (can be negative), find the minimum # of non-negative numbers to add to make the mean 1.
Notes:
Casework: if the mean is already 1, we are done. If the mean is less than 1, we can always fix it by adding a single number. Otherwise (mean > 1), add sum - length zeros, since each zero only grows the denominator.
Solutions:
36. Valid Sudoku
Problem Summary:
Check if a 9x9 partially filled sudoku board is valid (not necessarily solvable).
Notes:
Scan over rows, columns, and boxes. Use box hashing:

const rowGrouping = Math.floor(rowNumber / 3); // rows 0-2 -> 0, 3-5 -> 1, 6-8 -> 2
const colGrouping = Math.floor(colNumber / 3); // cols 0-2 -> 0, 3-5 -> 1, 6-8 -> 2
// if we are in rowGrouping 0, we know we are in boxes 0, 1, or 2
// if we are in rowGrouping 1, we know we are in boxes 3, 4, or 5, etc
// the first possible box we could be in, based on the rowGrouping, is rowGrouping * 3
// the colGrouping then provides an offset to add
const boxNumber = rowGrouping * 3 + colGrouping;
Drill Type:
🧠 Mindsolve: refresh box hashing (for interviews)
Solutions:
14. Longest Common Prefix
Problem Summary:
Find the longest common prefix among N strings (sum of lengths is small)
Notes:
It's just brute force: check one letter of each string at a time. This is optimal complexity, since we must read the input anyway.
Solutions:
9. Palindrome Number
Problem Summary:
Return true if the number X is a palindrome
Notes:
To do with just math we can "pop" the last digit by mod 10 and build out a reversed number, then compare those 2
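A quick sketch of the pure-math version (no string conversion; name mine):

```python
def is_palindrome_number(x):
    if x < 0:
        return False  # negatives can't be palindromes
    original, reversed_num = x, 0
    while x > 0:
        reversed_num = reversed_num * 10 + x % 10  # pop last digit, push onto reverse
        x //= 10
    return original == reversed_num
```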
Drill Type:
🧠 Mindsolve: Mindsolve the math reversal technique
Solutions:
1. Two Sum
Problem Summary:
Find the indices of two numbers that add to X
Notes:
hashmap
Solutions:
3576. Transform Array to All Equal Elements
Problem Summary:
We have an array of -1 and 1. We can perform an operation to flip two adjacent numbers. If we can make at most K operations, can we make the entire array equal?
Notes:
Just greedily attempt both all positive and all negative
Solutions:
3584. Maximum Product of First and Last Elements of a Subsequence
Problem Summary:
Find the maximum product of the first and last elements of any size-m subsequence of nums. N, M <= 1e5, A[i] between -1e5 and 1e5.
Notes:
We can compute suffix mins and maxes and scan, checking a distance of m - 1 away.
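A sketch of that scan (names mine): for each candidate first index i, the last element sits at index >= i + m - 1, so pair nums[i] with both the suffix max and suffix min from there (the min matters when nums[i] is negative).

```python
def max_product(nums, m):
    n = len(nums)
    NEG, POS = float('-inf'), float('inf')
    suf_max = [NEG] * (n + 1)
    suf_min = [POS] * (n + 1)
    for i in range(n - 1, -1, -1):
        suf_max[i] = max(nums[i], suf_max[i + 1])
        suf_min[i] = min(nums[i], suf_min[i + 1])
    best = NEG
    for i in range(n - m + 1):
        # last element is anywhere in nums[i + m - 1:]
        best = max(best, nums[i] * suf_max[i + m - 1],
                   nums[i] * suf_min[i + m - 1])
    return best
```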
Solutions:
3583. Count Special Triplets
Problem Summary:
Find the # of index triplets (i < j < k) where the middle element is half of both outer elements.
Notes:
Use a prefix and suffix and scan for O(n)
Solutions:
3582. Generate Tag for Video Caption
Notes:
Basic string manipulation
Solutions:
3581. Count Odd Letters from Number
Notes:
Basic counting question
Solutions:
3659. Partition Array Into K-Distinct Groups
Problem Summary:
We have numbers, N, A[i] <= 1e5, can we split them into groups of size K where every element in each group is distinct?
Notes:
First, make sure n % k == 0 and that the most frequent number occurs at most n / k (the number of groups) times; otherwise it is impossible. Imagine the frequency of our numbers as a histogram, where a taller bar means a more frequent element:

   O
 OOO
OOOO

Frequencies are [1, 2, 2, 3]. Say we are trying to make 4 groups of size 2. The slots will look like this:

X|X|X|X
X|X|X|X

Simply lay the most frequent element (frequency 3) across three groups:

X|X|X|X
O|O|O|X

Now lay the next most frequent (frequency 2); here lowercase o shows the second layer:

o|X|X|X
O|O|O|o

And repeat. Of course we can fill out all the slots, because no frequency is > 4 (the number of groups).
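The feasibility check itself is tiny — a sketch (name mine) of the two conditions the layering argument justifies:

```python
from collections import Counter

def can_partition(nums, k):
    n = len(nums)
    if n % k != 0:
        return False
    groups = n // k
    # a value appearing more than `groups` times must repeat inside a group
    return max(Counter(nums).values()) <= groups
```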
Drill Type:
🧠 Mindsolve: Mindsolve and review the notes
Solutions:
3658. GCD of Odd and Even Sums
Problem Summary:
Find the GCD of the sum of the first n odd numbers and the sum of the first n even numbers.
Notes:
Could even use math: the first n odds sum to n^2 and the first n evens to n(n+1), then take their GCD, O(1). In fact gcd(n^2, n^2 + n) = n * gcd(n, n + 1) = n.
Solutions:
Planets Queries I
Problem Summary:
You have N planets, each with a teleporter to some other planet. We have Q queries: starting from a planet, teleport up to K times; which planet do you end up on? N, Q <= 2e5, K <= 1e9.
Notes:
Binary lifting
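A minimal binary-lifting sketch (names mine): up[p][v] is the planet reached from v after 2^p teleports, and a query decomposes k into powers of two.

```python
LOG = 30  # 2^30 > 1e9, enough for the max k

def build(nxt):
    # nxt[v] = planet reached from v after one teleport
    up = [nxt[:]]
    for p in range(1, LOG):
        prev = up[-1]
        up.append([prev[prev[v]] for v in range(len(nxt))])
    return up

def jump(up, v, k):
    for p in range(LOG):
        if k >> p & 1:
            v = up[p][v]
    return v
```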
Drill Type:
💻 Implement: Implement a solution
Solutions:
1700. Number of Students Unable to Eat Lunch
Problem Summary:
A cafe serves 2 types of food. Students stand in a queue and each wants a specific type. Foods are placed in a stack and students either take from the top of the stack or loop to the back of the line. How many students cannot eat?
Notes:
We can simulate, which is slow, but the constraint is small; with higher constraints there is probably a better way (I didn't look). Not really a queue/stack problem either (at least the easy solution isn't).
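The faster counting route (a hedged sketch; name mine): students of the same preference are interchangeable, so just count how many still want each type and eat through the stack until the front sandwich is unwanted.

```python
from collections import Counter

def count_unable(students, sandwiches):
    want = Counter(students)
    for s in sandwiches:
        if want[s] == 0:
            # nobody left wants this sandwich, so everyone remaining is stuck
            return want[1 - s]
        want[s] -= 1
    return 0
```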
Solutions:
1701. Average Waiting Time
Problem Summary:
A chef cooks for customers, each given as [arrival, cookTime], and he cooks in arrival order; what is the average waiting time?
Notes:
We can just simulate and count it
Solutions:
1704. Determine if String Halves Are Alike
Problem Summary:
Do the first half and the second half have the same number of vowels?
Solutions:
1706. Where Will the Ball Fall
Problem Summary:
We have a 2d grid and we drop balls, it's like plinko sort of (see picture). For each starting location at the top where does it fall.
Notes:
I used dp(row, column, quadrant in square, moves at this row). Moves at this row prevents us from cycling back and forth forever.
Drill Type:
💻 Implement: solve it
Solutions:
1708. Largest Subarray Length K
Problem Summary:
An array "A" is considered "larger" than another if the first index they differ at, "A" has a bigger element. Find the largest subarray of length K for an array where every number is distinct.
Notes:
Scan left to right over the starts that leave room for a full subarray of length K, tracking the position of the biggest number seen, and use that as the start.
Solutions:
Two Sets
Problem Summary:
Given a number n, we have 1, 2, 3, ... n. Partition into 2 sets of equal sum and print them, or determine not possible
Notes:
Keep taking from the left until we overflow, then subtract. We are guaranteed to be able to make all sums up to 1 using just 1. So when we add 2, we can make 2 and all previous sums plus it (up to 3). When we add 3, we can extend all previous ranges, so up to 6, and so on. Eventually we will overflow the target by some amount smaller than the number we just added, so we can remove that single number.
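The take-from-the-left construction as a sketch (name mine): greedily take 1, 2, 3, ... until we reach or pass half the total, then drop the single number equal to the overshoot (it is always smaller than the last number taken, so it is in our set).

```python
def two_sets(n):
    total = n * (n + 1) // 2
    if total % 2:
        return None  # odd total: impossible
    target = total // 2
    first, s, x = [], 0, 1
    while s < target:
        first.append(x)
        s += x
        x += 1
    over = s - target
    if over:
        first.remove(over)  # over < last taken number, so it's present
    second = sorted(set(range(1, n + 1)) - set(first))
    return first, second
```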
Drill Type:
🧠 Mindsolve: Mindsolve this
Solutions:
C2. Pokémon Army (hard version)
Problem Summary:
Given an array of numbers, A[i] <= 1e5, N <= 3e5, and Q queries, Q <= 3e5: each query swaps two positions, and after each we report the largest alternating subsequence sum A - B + C - ... The input array is a permutation.
Notes:
A segment tree where each node stores four values: the largest sum in its range where the first and last taken elements were (added, added), (added, subtracted), (subtracted, added), and (subtracted, subtracted). Note the (subtracted, subtracted) base case for a leaf node is -1 * value, not NINF. There is another perceptive solution. Because the input is a permutation, every number is a strict local maximum or local minimum (bigger or smaller than both neighbors; the ends of the array count as -inf neighbors). The length of our sequence should logically be odd, and the odd (added) positions should always be local maximums: if one were not, some adjacent neighbor is bigger and we could take that instead and gain more. If that neighbor were an even (subtracted) position, we could delete both it and our current odd position so as not to lose our contribution (need to think about this more). But essentially all local maxes and local mins should be taken (also need to think about this). And to support swaps we can do bookkeeping on this.
Drill Type:
🧠 Mindsolve: Remember this application of segment tree, also stay vaguely aware of the local maximum and minimum idea
Solutions:
D. Rescue Nibel!
Problem Summary:
We have N ranges [L, R]. N <= 3e5 R <= 1e9. We need to select K ranges that all overlap, how many ways can we do this? K <= 3e5.
Notes:
First note that picking a range and counting how many ranges touch it doesn't work: if two ranges touch our selected range, that doesn't mean those two overlap each other. Instead we sweep by coordinate, left to right. At each point we can pick some ranges that already started, but we need at least one range starting exactly here to avoid double counting. We can try taking 2 ranges starting here, then 3, and so on, as it amortizes. Combinatorics. As we slide, we pop ranges that already ended from a heap or multiset on the left.
Drill Type:
🧠 Mindsolve: Mindsolve this
Solutions:
C1. Pokémon Army (easy version)
Problem Summary:
Find the largest alternating sum subsequence (A - B + C - D ...)
Notes:
Standard DP I used bottom up
Solutions:
B. Rock and Lever
Problem Summary:
Find the # of pairs of numbers where the bitwise AND is >= their bitwise XOR.
Notes:
Two numbers each have a highest set bit. If it is the same, the AND is bigger; if it is different, the XOR is bigger. Store counts of highest set bits in a frequency table and update it as you iterate.
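The highest-set-bit observation as a sketch (name mine): AND >= XOR exactly when the two numbers share the same highest set bit, so group numbers by bit length.

```python
from collections import Counter

def count_pairs(nums):
    cnt = Counter(x.bit_length() for x in nums)
    # any two numbers with the same highest set bit form a valid pair
    return sum(c * (c - 1) // 2 for c in cnt.values())
```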
Solutions:
A. Cubes Sorting
Problem Summary:
We have an array and need to make it non-decreasing; in one operation we can swap two adjacent elements. Is the min # of swaps < n*(n-1)/2?
Notes:
First note every element needs to swap past the # of elements bigger than it on its left. We could compute this in a variety of ways (I used a cursed dynamic segment tree lol), but also note the worst case, a strictly decreasing array, is exactly n*(n-1)/2 swaps, so we can just check for that condition. Also, the # of elements bigger on the left CANNOT be counted with a monostack. Not listing dyn seg tree or coordinate compression since they are so out there.
Solutions:
B. Interesting Array
Problem Summary:
We have to construct an array of size N that adheres to M requirements. Each requirement is the bitwise AND of arr[l:r] = some given and value. Return the array or impossible.
Notes:
For every element in a query range, all the AND bits need to be set at a minimum, so set those. Note that setting more bits cannot help meet more requirements, so it is not required. We can then check feasibility with a sparse table on the ANDs. To apply the AND bits I used a sweep line, LOG * n in size. Could also use a lazy seg tree to apply the bits and a normal seg tree to query the ANDs.
Solutions:
Ferris Wheel
Problem Summary:
We have people with different weights, a gondola has a max weight and can seat 1 or 2 people, how many do we need?
Notes:
Sort and use two pointers from the ends. Always try to pair the highest weight with the lowest; there is no need to pair it with a heavier light person, because we aren't "saving room" for another heavy person — we are already seating the heaviest.
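A sketch of the two-pointer pairing (name mine):

```python
def gondolas(weights, limit):
    w = sorted(weights)
    lo, hi, count = 0, len(w) - 1, 0
    while lo <= hi:
        # seat the heaviest; bring the lightest along if they fit together
        if lo < hi and w[lo] + w[hi] <= limit:
            lo += 1
        hi -= 1
        count += 1
    return count
```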
Solutions:
Apartments
Problem Summary:
We have apartments of different sizes and people who want to live in apartments of given sizes, but accept a threshold difference of K, how many people get apartments?
Notes:
Sort both arrays and greedily assign with two pointers
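A sketch of the greedy two-pointer matching (name mine), advancing whichever side is too small to ever match again:

```python
def apartments(desired, sizes, k):
    a, b = sorted(desired), sorted(sizes)
    i = j = matched = 0
    while i < len(a) and j < len(b):
        if abs(a[i] - b[j]) <= k:
            matched += 1
            i += 1
            j += 1
        elif b[j] < a[i] - k:
            j += 1  # apartment too small for every remaining applicant
        else:
            i += 1  # applicant wants too little for this and later apartments
    return matched
```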
Solutions:
Movie Festival II
Problem Summary:
We have movies with [L, R] times and K people. How many movies can be watched in total?
Notes:
First thought might be: store a heap of R values (ending times of people currently watching movies), sort movies by R, and when we reach a movie, pop everyone whose end is <= its L. See this test with k=2: (1, 50), (1, 60), (100, 150), (40, 200). We would assign 2 people to 50 and 60, pop both for 150, but then we cannot actually attend 200. Sorting by L doesn't work either, because we cannot determine whether this L is worth watching or should be saved. Instead imagine K lanes of people: sort movies by R to greedily consume early, and assign each movie to the person with the latest ending time that is <= its L. I sorted by R, and ties on R don't matter, because we always assign to the largest free ending time, so any order works. I don't think another tie-break works (like: on tied R, put the leftmost L first and assign to the smallest free ending instead of the biggest <= L), because a later movie with a bigger R might demand a small L that we filled up.
Drill Type:
💻 Implement: remember the idea, read the notes, and implement
Solutions:
Traffic Lights
Problem Summary:
We have a road from 0...x, and every second a light turns on at one point on the road; after each one, what is the longest stretch of unlit road? Note the lights operate on ranges, so if we have [0, 1, 2, 3] and we turn on 0 and 2, 0-2 is still a valid unlit passage.
Notes:
First, we can use a sparse seg tree with point updates (lights) and a range "longest contiguous run of 0s" query, by storing pref, suff, max, and width inside the nodes. A lot of lessons with sparse/lazy node practice: create both children every time in pointUpdate, use pointers safely, and don't store nodes in a vector — it is so easy to make bugs. In the pointUpdate function of a sparse seg tree, as we descend and create nodes, we can even give them dummy values (see this TLE python solution: https://cses.fi/problemset/result/15936995/), because as we recurse down, those nodes will pull correct values anyway. But the child we don't recurse into has to be made correct, since we will pull on it. With a vector of nodes and index "pointers" it was so hard: pushing to a vector and doing Node& node = tree[i] causes issues because the tree vector resizes and tree[i] diverges from the reference. Inside a pull of a sparse seg tree, since we already have the node, update its values directly to keep parent pointers that point down to us valid.

Now for the query: this is one of those weird point-vs-range index tricks. If we have 0 1 2 3 (4) 5 (6) 7, where () means lit, our longest passage is 4, from 0 to 4 — even though 4 is lit. There are some edge cases with lit boundaries too. The point is we cannot just take the longest run of unlit points and add 1 to get the longest passage: the area around 5 has an unlit passage of 2 because both its boundaries are lit, while 7 has an unlit passage of 1 since it sits at the edge. The easiest implementation is to light 0 and x at the start, and everything works.

I also did a multiset interval-tree-style solution, but instead of storing ranges we just store the cut points, which makes the implementation easy and is good practice. I also think we could do a dense segment tree with compression, but it seems difficult to manage the widths of things.
Drill Type:
💻 Implement: Implement a sparse seg tree (tricky thing around the pointRecurse function), use pointers in C++ for the seg tree, and remember the point versus span weirdness in this question; also implement the interval tree inverted
Solutions:
Collecting Numbers II
Problem Summary:
We have a permutation and will scan left to right, picking up numbers in order. We have M swaps where we swap position A with B. After each swap, report the # of scans needed to collect all numbers.
Notes:
Any time a number's successor appears before it (e.g. in [6, 1, 5], 6 appears before 5, so that pair is inverted), we require another scan. We can track the count of such inversions with bookkeeping, since a swap only affects a constant number of pairs. Remember to order each swap query as [L, R]; I had a bug there.
Solutions:
Tree Matching
Problem Summary:
A matching of edges is one where each node touches at most 1 edge. Find the max # of edges in a matching.
Notes:
Use greedy; dfs returns (edgesUsed, isRootTouchingEdge). We can connect the current node to a child if some child wasn't already part of an edge. Watch for tricks when checking for a leaf in C++ using dfs(node, parent): we cannot just check g[node].size() == 1, since node could be the root. Also, a base case of having one child is not sufficient, since that child could have more leaves below it.
Solutions:
Tree Distances I
Problem Summary:
find the max distance to another node for all nodes
Notes:
Tree rerooting DP. First compute the answer for a max dist below for all nodes, when rooted at 0. Next we reroot. Iterate over adjacent children to solve for them. The max distance for a child is either its down distance (computed in dfs 1), 1 + max distance going up from our node (we will maintain an up array we solve for in dfs2), or 1 + max distance going down, from our node, but not going through this child. To compute that we need the best 2 downward paths (computed in step 1).
Drill Type:
💻 Implement: Solve with reroot
Solutions:
Tree Distances II
Problem Summary:
for each node in a tree, find the sum of distances to all other nodes
Notes:
First compute the size of the subtree for all nodes (rooted at 0), and then we can also get the sum of distances to all nodes below each node. The root 0 then has the sum of distances to all other nodes. For the transition, run a second dfs iterating over children of a node: the child's answer is the parent's answer, but all nodes in the child's subtree move 1 closer and all other nodes move 1 further.
Drill Type:
💻 Implement: Implement reroot dp
Solutions:
Hotel Queries
Problem Summary:
support point update and query leftmost >= X
Notes:
use a max seg tree and walk, could use sqrt decomp technically
Drill Type:
💻 Implement: Implement the seg tree walk
Solutions:
Distinct Values Queries II
Problem Summary:
Support point updates and range queries [L, R] "are all numbers distinct"
Notes:
We can keep a nxt[i] array which points to the next occurrence of that number, or N if there are no further ones. Build a range-min seg tree on this; if the min over [L, R] is > R, all values are distinct. To support updates we need sorted containers of indices per value for the bookkeeping. We could also compute the # distinct with a merge sort tree on the nxt pointers, counting how many values in the range are > R. I think fractional cascading could speed up the merge sort tree when using vectors, and I think order statistic containers could allow updates, but not sure... Tin said there is a 2d structure like a Fenwick tree on a persistent segment tree; not even going to tag this. He said you can range assign and count distinct too.
Drill Type:
💻 Implement: Implement this nightmare
Solutions:
Distinct Values Queries
Problem Summary:
Support offline [L, R] distinct # of values queries
Notes:
Mo's algorithm to order the queries offline. A hashmap tracking frequencies as we slide TLEs due to the constant factor in the hot path. Instead, compress values to indices and store frequencies in a plain array: compressed[i] is just the compressed index of arr[i]. Maintain an active-distinct count and adjust it whenever a frequency hits 0 or 1. This constant-factor optimization passes. Also, when doing Mo's we should expand both boundaries first; otherwise l could pass r and we would start subtracting elements from the range. That gets undone when we expand r, but bookkeeping (for instance checking a frequency is 0 and deleting it from a hashmap) might get bugged. We can also support online queries in log N with a persistent segment tree. We could also get the # distinct with a merge sort tree on nxt[i] pointers, searching for how many values are > R; I think fractional cascading can speed this up, and I think we could even support updates using order statistic containers. We could also do offline scanning weirdness: make a first[i] array which is 1 if i is the first occurrence of a number, otherwise 0. Sort queries by L. As L advances we lose numbers out the left, so we must promote the next occurrence of each lost number inside the window (doable with nxt array bookkeeping). To count the # of first occurrences in our window, use Fenwick tree diffing to query the sum over l...r.
Drill Type:
💻 Implement: Just implement mo's smoothly along with the other needed optimizations
Solutions:
3762. Minimum Operations to Equalize Subarrays
Problem Summary:
Support range queries [L, R]. Given an operation where we can change A[i] by +-k, find minimum # of operations in a range to make every number equal, or -1 if not possible.
Notes:
If the numbers in a range do not all have the same remainder mod k, it is not doable. We could build a remainder array and compare range max and min with a sparse table, or use many other solutions (Mo's + max/min via the two-stacks trick or a monodeque, Mo's + hashmap of unique values, an online or offline distinct-count query on the remainder array, etc.). Otherwise we need the median of the range, the sum of elements <= the median, and the sum >= the median, and we set everything to the median. Sliding-window multisets with Mo's are O(n * rootN * logN) and TLE because of overhead. Mo's + a Fenwick/seg tree is also O(n * rootN * logN) but passed with some optimizations. Some notes:
-The Fenwick or seg tree can be indexed by value, not position, and we add and remove values as we slide. To find the median, find the k-th element (store counts in nodes and do a tree walk); store sums in nodes to get the sum of the first k elements. We could also support queries like the sum of numbers in [lowerBound, higherBound] by diffing prefixes or using a seg tree. We also need coordinate compression: compress all array values and binary search query values to the nearest compressed value, or compress both. Basically a few ways to do things.
To AC I used these optimizations:
-Hilbert curve ordering
-Fenwicks instead of a seg tree
-Find the k-th element (median) and the sum of the first k, then O(1) for the top-half sum and count, so only 2 log operations per Mo step.
We can also answer range median and range "sum <= median" online with a persistent seg tree for n log n. Tin has some sqrt decomp code that seems to support point updates, range median queries, and range sums over a value range x <= ? <= y too, but I'm not sure how it works: https://leetcode.com/problems/minimum-operations-to-equalize-subarrays/submissions/1844480184/ I bet we can do a merge sort tree with prefix sums too, or something crazy.
Drill Type:
💻 Implement: Implement something and also read through notes and different ideas
Solutions:
F. Range Update Point Query
Problem Summary:
Support range update "replace every number with the sum of its digits" and point query
Notes:
I did it with a lazy seg tree where the lazy value is a count of operations applied. To point query, we reconcile the lazy updates by applying them one at a time, breaking early once the number is a single digit (further digit-sum operations are no-ops). I think we can also do a normal seg tree over a diff array, where the range update is A[L] += 1 and A[R+1] -= 1 and we query the prefix total, since we only do point queries. Also, since a given number can only meaningfully change a few times, we could just track still-changing numbers in sorted containers. Probably other solutions.
Solutions:
Coin Piles
Problem Summary:
we have two piles of coins, we can remove [1, 2] or [2, 1] from them, can we empty them at the same time?
Notes:
if the total is not divisible by 3, no. If the max is > 2 * min, no. Otherwise, we can equalize their sums. And then take away [3, 3] in sets of 2 moves.
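The whole check fits in one line — a sketch (name mine):

```python
def can_empty(a, b):
    # total must shrink by 3 per move, and the bigger pile can't
    # need more than 2 coins removed per move relative to the smaller
    return (a + b) % 3 == 0 and max(a, b) <= 2 * min(a, b)
```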
Solutions:
Bit Strings
Problem Summary:
compute 2^n % MOD
Solutions:
Tree Diameter
Problem Summary:
Find the max diameter of a tree
Notes:
Rooted at 0, a max diameter must pass through some LCA node, so for each node do a tree convolution on its two longest downward chains to update the result.
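The convolution can be sketched like this (iterative DFS to dodge recursion limits; names mine): at each node, combine its two longest downward chains.

```python
from collections import defaultdict

def tree_diameter(n, edges):
    g = defaultdict(list)
    for a, b in edges:
        g[a].append(b)
        g[b].append(a)
    parent = [-1] * n
    order, stack = [], [0]
    visited = [False] * n
    visited[0] = True
    while stack:
        v = stack.pop()
        order.append(v)
        for u in g[v]:
            if not visited[u]:
                visited[u] = True
                parent[u] = v
                stack.append(u)
    down = [0] * n  # longest chain going down from each node
    best = 0
    for v in reversed(order):  # children are processed before parents
        chains = [0, 0]
        for u in g[v]:
            if parent[u] == v:
                chains.append(down[u] + 1)
        chains.sort(reverse=True)
        down[v] = chains[0]
        best = max(best, chains[0] + chains[1])  # convolve two longest chains
    return best
```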
Solutions:
Minimal Grid Path
Problem Summary:
Given a grid of letters, find the lexicographical smallest string to go from top left to bottom right, only moving down or right.
Notes:
DP is n^3 because we need to compare strings at every cell. There is a cursed way to get around n^2 log n. Say we are computing the smallest string reaching (r, c): it is either the string at (r-1, c) or at (r, c-1), plus the new letter, and we need to know which is smaller. Instead of holding the entire smallest string for each cell, we can set up dynamic binary lifting over each cell's best-choice predecessor (so maybe (r-1, c)'s best choice was (r-2, c)). Using prefix hashes along the optimal paths from (0, 0) to (r-1, c) and from (0, 0) to (r, c-1), we can find the longest common prefix and thus the first letter where they differ, which decides which path is lexicographically smaller. I think binary search + binary lifting gives n^2 log^2 n, but there should be a way to get just n^2 log n according to chatgpt. The better way is just a frontier: store the list of current frontier positions, check all their down and right neighbors, and keep as the new frontier only the positions with the minimum letter. O(n^2).
Solutions:
Palindrome Queries
Problem Summary:
support point character updates and range is palindrome queries
Notes:
Good practice for understanding the forward hash and reverse hash of s[l:r] substrings. In the seg tree we need to shift hashes by a certain amount when merging nodes. In the query function we are essentially merging new nodes we create on the fly. If we pass ql and qr down to queries, things get messed up with the widths: a child seg tree node with range [0, 4] for a query [2, 7] will merge two children, but when we merge children we cannot do, say, qr - (tm + 1) + 1 for the right width, because the right merge is probably something like [3, 4] with width 2. We can either cap the widths as below (no need to do it in the recursive calls), or just store a width on the Node struct and merge that (easier):

Node _query(int i, int tl, int tr, int ql, int qr) {
    // if we are fully in range, return our node
    if (ql <= tl && qr >= tr) {
        return tree[i];
    }
    int tm = (tl + tr) / 2;
    // if only the left is in range, return that
    if (qr <= tm) {
        return _query(2 * i, tl, tm, ql, qr);
    }
    // if only the right is in range, return that
    if (ql >= tm + 1) {
        return _query(2 * i + 1, tm + 1, tr, ql, qr);
    }
    // we need to merge both
    Node left = _query(2 * i, tl, tm, ql, qr);
    Node right = _query(2 * i + 1, tm + 1, tr, ql, qr);
    int leftWidth = min(qr, tm) - max(ql, tl) + 1;
    int rightWidth = min(qr, tr) - max(ql, tm + 1) + 1;
    return merge(left, right, leftWidth, rightWidth);
}
Solutions:
Subordinates
Problem Summary:
print the size of all subtrees
Solutions:
Forest Queries
Problem Summary:
Support 2d range sum queries
Notes:
Standard 2d prefix sums
Solutions:
Range Updates and Sums
Problem Summary:
Support range assign, range add, and range sum queries
Notes:
Not listing sqrt decomp. This is a lesson in how to cleanly implement a lazy segment tree. Some invariants:
1/ If a node has a lazy tag, it implies the tag has already been applied to that node itself. Why? Supposedly it makes it a bit easier to code, and tin_le once had a bug without doing it this way (though I think you can also make the tag mean "still needs applying").
2/ We should push down tags in everything (queries and updates), not just queries. I believe in theory we only need to push in queries for some problems, like range add + range sum: if we range add to the entire tree we put a tag at the root, and if we then range add to a leaf we could put a tag at the leaf without pushing the root down, reconciling at query time. But for more complex problems like this one we might need to push in every function (not sure); either way it feels easier and more logical to always push.
3/ Push really means: apply this node's tag to its children, then compose it into their lazy tags. "pushDownLaziesAndClearCurrentLazies"
4/ Pull just means: recompute the current node from freshly updated children.
Solutions:
F. Cherry Tree
Problem Summary:
The leaves of a tree each have a cherry. We can shake a subtree to remove all cherries in it, but if you shake a leaf that already dropped the cherry the tree broke. Can we clear all cherries with a # of shakes divisible by 3?
Notes:
Standard tree convolution where we look at children and merge them in one by one, but the implementation is a bit more unintuitive here. Normally we have a beforeChild and afterChild state and initialize beforeChild to an identity value, but the identity in this problem was confusing: [True, False, False], meaning "we can clear all cherries in this tree with 0 shakes mod 3". This is vacuously true: when we consider just the root node as the tree (before any children) there are no leaves inside yet, so we can clear this empty tree. To aggregate, we need to clear the combined LHS + RHS and can update newDp[0], [1], and [2]. I ended up implementing it with no identity value, starting my beforeChild dp on the first child and looping through the rest. When merging in children we do need to update newDp[1], and this does not consider the chance that we shake the entire subtree at the root — that is more of an edge case handled afterwards, always updating newDp[1] at the very end. There is also a second editorial solution I didn't understand.
Drill Type:
💻 Implement: I should understand the idea of the identity value in tree convolution, what the exact meaning of the before- and after-child states is for this problem, and how shaking the entire tree is unrelated to the convolution
Solutions:
Hidden Integer
Problem Summary:
basic interactive problem that is binary search
Solutions:
Visible Buildings Queries
Problem Summary:
We have buildings with heights, standing from the left we can see only a certain amount due to their heights. Given Q queries [a, b], if only buildings in [a:b] existed, how many buildings could we see?
Notes:
First, for each index compute the next greater index (I pushed MAX_HEIGHT + 1 to the end to make it easy). Then build binary lifting, where jump[power][index] tells us the next index we can go to, and jump[power][sentinel] points to itself so we can index into the jump tables without if/else branching. Now we can answer queries.
Solutions:
Longest Common Subsequence
Problem Summary:
Find the length of the LCS and reconstruct it
Notes:
Should practice: bottom-up pull dp + reconstruction, bottom-up push dp + reconstruction, top-down dp + reconstruction. A lot of lessons here.

Say we build dp[i][j] to represent the LCS for prefixes ...i-1 and ...j-1; then the answer is dp[n][m]. I did this with push dp, but I could do dp[i][j] = LCS for ...i and ...j with pull dp. Note we can do push or pull dp with either representation (...i ...j or ...i-1 ...j-1); we may need if/else branching to handle out of bounds in the different cases.

Once we finish the DP, we need to reconstruct. We could try to walk forwards from (0, 0), but it will not work. Visualize this: file:///Users/ishaan/Desktop/Screenshot%202026-01-12%20at%2012.42.09%E2%80%AFAM.png If A[i] and B[j] match, sure, we should always walk that diagonal to construct the path covering the most diagonals, but if they do not match, we don't know which way to turn. Analyzing dp[i+1][j] and dp[i][j+1] tells us nothing, because that is information about PREFIXES — the most diagonals we can get reaching that cell. We need suffix info. If we build our dp over suffixes i... and j..., we could "walk forwards", but really that is walking backwards in reverse. So instead, we start at the bottom right: if we can go up-left along a diagonal we always should; otherwise, we pick the better prefix result. This is essentially how top-down works, because dp(i, j) is SUFFIX information — even if it feels like we are constructing forwards in top-down, we are not. We could also just do our bottom-up DP as a suffix and then "build forwards" (really, backwards).

In bottom-up pull dp where dp[i][j] represents ...i ...j inclusive, if A[i] == B[j], in theory our best path can always take the (i-1, j-1) diagonal and we don't need to pull in (i-1, j) or (i, j-1) as options. Same with push dp (with ...i-1 ...j-1 in this example): if A[i] == B[j] we should only need to push dp[i+1][j+1] and not bother updating dp[i][j+1] or dp[i+1][j] — but the reconstruction fails if we put those other 2 updates inside an else branch.
Why? See this example: [1, 2, 2] [1, 2] The dp is: 0 0 0 0 1 0 0 0 2 0 0 2 So at our bottom right, we could go up-left for a valid LCS based on the sequences, and we attempt to in our reconstruction, but when we filled the DP table we did not produce the LCS with those steps because we skipped cells, so we mess up our reconstruction. And it did that because at the 1 we only pushed to down-right instead of down also, due to shortcut. The best lesson here is to just enumerate all edges in both push and pull DP. Even if we can skip and take shortcuts without affecting the final dp[-1][-1], if we don't take our shortcuts, then every dp cell is filled our accurately which is nice.
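A minimal runnable sketch of the approach above (bottom-up pull DP over prefixes, then backwards reconstruction from the bottom-right corner; the function name is illustrative):

```python
def lcs_with_reconstruction(a: str, b: str) -> str:
    """dp[i][j] = LCS length of prefixes a[:i] and b[:j] (pull DP)."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk backwards from the bottom-right: always take a matching
    # diagonal, otherwise step toward the larger prefix result.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```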
Solutions:
Writing Numbers
Problem Summary:
We are given N <= 1e18, we can write each digit N times, find the largest number we can write in a sequence 1, 2, 3, ...
Notes:
Binary search on the answer; use digit DP to calculate the # of ones used to write 1, 2, 3, ..., X (the digit 1 always runs out first). Can do dp(i, tight) returning (numOnes, numWays), or dp(i, tight, onesUsed). We can also use math for each digit position: digits go in cycles, like the tens digit over 1, 2, 3, ... is 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 ..., so we can compute full cycles and partial cycles and add up the contributions.
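The per-digit cycle math can be sketched like this (a hypothetical helper, not the full binary-search solution); it counts how many times a digit d appears when writing 1, 2, ..., n:

```python
def count_digit(n: int, d: int) -> int:
    """Occurrences of digit d (1-9) in 1..n. At each place value p, the
    digit cycles with period p * 10: count the full cycles, then clamp
    the partial cycle's contribution into [0, p]."""
    total = 0
    p = 1
    while p <= n:
        cycle = p * 10
        total += (n // cycle) * p                 # full cycles at this place
        rem = n % cycle
        total += min(max(rem - d * p + 1, 0), p)  # partial cycle
        p *= 10
    return total
```

A binary search would then find the largest X whose digit-1 count stays within the budget.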
Solutions:
Swap Game
Problem Summary:
given a 3x3 board we can make orthogonal swaps, construct 1 2 3 4 5 6 7 8 9 in order
Notes:
Just do BFS. We can represent the board as a single number and make a swap by exchanging two digits; to grab a digit from the middle, floor-divide the number by a power of 10 and mod by 10.
Solutions:
Range Update Queries
Problem Summary:
support range addition and point queries
Notes:
Can just use a normal segment tree on a diff array: a range update is diff[L] += X and diff[R+1] -= X, and a point query at i is the prefix sum diff[0..i]. Not listing other methods like sqrt decomp.
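A sketch of the same diff-array idea with a Fenwick tree standing in for the segment tree (a point query is just the prefix sum of the diff array; names and 0-based indexing are my assumptions):

```python
class RangeAddPointQuery:
    """Fenwick (BIT) over a difference array: range add, point query.
    add(l, r, x) does diff[l] += x, diff[r+1] -= x; query(i) is the
    prefix sum of diff up to i, i.e. the current value at index i."""

    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (self.n + 1)
        for i, v in enumerate(values):
            self._add(i, v)
            self._add(i + 1, -v)

    def _add(self, i, x):
        i += 1
        while i <= self.n:
            self.tree[i] += x
            i += i & (-i)

    def add(self, l, r, x):
        self._add(l, x)
        self._add(r + 1, -x)

    def query(self, i):
        # value at index i = prefix sum of the diff array
        s = 0
        i += 1
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s
```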
Solutions:
Counting Bits
Problem Summary:
Count the # of set bits in all numbers from 1 to N, N <= 1e15
Notes:
Can use digit DP in two ways: dp(i, tight, onesUsed), or dp(i, tight) which returns (numWays, numOnes). Cannot use a global res in the DP (see code).

Can also use bit-operations math. Write out the bits:

0  0000
1  0001
2  0010
3  0011
4  0100
5  0101
6  0110
7  0111
8  1000
9  1001
10 1010
11 1011
12 1100
13 1101
14 1110
15 1111

For a given bit position, it goes in cycles like 0 0 1 1 0 0 1 1, so for each bit we can see how many full cycles + partials it contributes.
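The bit-cycle counting can be sketched as (illustrative helper):

```python
def count_set_bits_upto(n: int) -> int:
    """Total set bits over 1..n. Bit b repeats 2**b zeros then 2**b ones
    with period 2**(b + 1); count full periods among 0..n plus the tail."""
    total, b = 0, 0
    while (1 << b) <= n:
        period, half = 1 << (b + 1), 1 << b
        total += ((n + 1) // period) * half       # full cycles
        total += max(0, (n + 1) % period - half)  # partial cycle tail
        b += 1
    return total
```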
Solutions:
Sliding Window Cost
Problem Summary:
For each window of size k, find the cost to change all elements to be the same (cost is abs(newVal-oldVal))
Notes:
- Sliding window with median using two sorted containers or heaps works
- Maintain the sum of each window bucket too, to calculate the cost of changing both halves into the median
- Can do online with a persistent segment tree somehow
Solutions:
Sliding Window Median
Problem Summary:
Find the median for all windows of size K
Notes:
- Can use persistent segment tree for online (I think)
- Can use two lazy heaps with rebalancing, but the implementation is a bit tricky
- Can use two multisets with rebalancing
The invariant is that we should always add a node to the correct upper or lower half of the data structure, don't just add it randomly. Then rebalance() rebalances sizes only; it doesn't worry about fixing incorrectly ordered elements.
Solutions:
Dynamic Range Minimum Queries
Problem Summary:
support point updates and range min queries
Notes:
could do sqrt decomp not listing though
Solutions:
Static Range Minimum Queries
Problem Summary:
answer range minimum [l, r] queries
Notes:
Sparse table is best; can also do seg tree, sqrt decomp, and Mo's.
- Sqrt decomp (online) is built on indices: each block tells us the minimum in that block; to query we scan over rootN blocks; to update we modify A[i] then recompute that block's minimum in rootN.
- Mo's: basic Mo's is n * rootN * logN with a sorted container, or essentially n * rootN with a FastSet. We can also get O(1) transitions with Mo's. To do this, compress all the values and build a sqrt decomp on values. To add a value to our Mo's window, we increase its occurrence count by 1 in the decomp (both freq[number] and the block containing that number); removal is the same in reverse. To query the minimum, scan blocks left to right until we find a block with a frequency >= 1, then scan the numbers inside it in rootN.
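A minimal sparse-table sketch for the O(1) queries (min is idempotent, so the two power-of-two blocks are allowed to overlap):

```python
def build_sparse_table(a):
    """table[k][i] = min of a[i : i + 2**k]."""
    n = len(a)
    table = [a[:]]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        half = 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_min(table, l, r):
    """Min of a[l..r] inclusive via two overlapping power-of-two blocks."""
    k = (r - l + 1).bit_length() - 1
    return min(table[k][l], table[k][r - (1 << k) + 1])
```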
Solutions:
Static Range Sum Queries
Problem Summary:
solve static sum queries [l, r]
Notes:
standard prefix addition
Solutions:
Nearest Smaller Values
Problem Summary:
for each index, find the index of the nearest smaller number on the left
Notes:
use a monotonic stack going right to left, good practice, not listing other worse solutions
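One direction of the monotonic stack, sketched as a single left-to-right pass (the right-to-left variant from the note works too; returning -1 for "no smaller value" is my convention):

```python
def nearest_smaller_left(a):
    """For each index, the index of the nearest strictly smaller value
    to the left, or -1. The stack holds indices whose values are
    strictly increasing."""
    res = []
    stack = []
    for i, v in enumerate(a):
        while stack and a[stack[-1]] >= v:
            stack.pop()  # dominated: never the answer for anything later
        res.append(stack[-1] if stack else -1)
        stack.append(i)
    return res
```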
Solutions:
Distinct Values Subarrays II
Problem Summary:
Find the # of subarrays with at most k distinct values
Notes:
We can do a sliding window and answer the rightmost end for each L, a bit tricky since we loop on L first, remember r defines the candidate we are considering, not yet in the window
Solutions:
Room Allocation
Problem Summary:
We have hotel customers staying at [arrival, departure] times. What is the min # of rooms to host them all? And print a valid room order.
Notes:
We greedily allocate rooms. Store a heap last[] which tells us departure times of people in the hotel. If no room is free create a new one. The heap holds (departureTime, roomUsed) to help bookkeep the available rooms.
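A sketch of the greedy with the (departureTime, roomUsed) heap, assuming customers arrive as (arrival, departure) pairs and a room frees up only after its departure day:

```python
import heapq

def allocate_rooms(customers):
    """Process customers by arrival; reuse the room with the earliest
    departure if it is free, else open a new room. Returns
    (room count, room number per customer in input order)."""
    order = sorted(range(len(customers)), key=lambda i: customers[i][0])
    free_by_departure = []  # heap of (departure, room_id)
    assignment = [0] * len(customers)
    rooms = 0
    for idx in order:
        arrival, departure = customers[idx]
        if free_by_departure and free_by_departure[0][0] < arrival:
            _, room = heapq.heappop(free_by_departure)
        else:
            rooms += 1
            room = rooms
        assignment[idx] = room
        heapq.heappush(free_by_departure, (departure, room))
    return rooms, assignment
```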
Solutions:
Josephus Problem II
Problem Summary:
We have n people and remove the kth at a time, print the removals in order
Notes:
We can simulate it by finding the kth after our current position, the github code is a very clean implementation. We can also use a segment tree walk to for instance find the k-th 1 in a range, and point update to 0.
Solutions:
Josephus Problem I
Problem Summary:
We have n people and remove the 2nd person each time, print the order of who is removed
Notes:
Simple simulation. With an array we can scan left to right (cost N) but we only do this logN times as it halves. Or we can use a deque to simulate for O(n)
Solutions:
Josephus Queries
Problem Summary:
We have n people in a circle and remove the second person every operation. Who is the k-th person removed?
Notes:
Assuming we have 1-indexed people [1, 2, 3, 4, 5, 6, 7] and k=3. Obvious base case: if k <= # of people removed this pass, we return 2*k. Also if 1 person is left [1], we remove that person.

// if we have an even amount of people
// 1 2 3 4 5 6 7 8
// 1 3 5 7
// ^
// 1 2 3 4
// Every new person * 2 - 1 equals our old label

// if we have an odd amount of people
// 1 2 3 4 5 6 7, k=5: we remove 3 people, new k=2
// 1 3 5 7
//       ^ but now this is the start
// 7 1 3 5

auto oneIndexResult = solve(newPeople, newRemovalQuery); // get the normal 1 3 5 7 indexing
oneIndexResult -= 1;
if (oneIndexResult == 0) { oneIndexResult = newPeople; }
// Now we have the index in the good sequence 1 3 5 7
// The index in our initial sequence is therefore 2 * that - 1
return 2 * oneIndexResult - 1;
Solutions:
Distinct Values Sum
Problem Summary:
Find the sum of # of distinct elements in a subarray for all subarrays
Notes:
Use the contribution technique: each occurrence contributes to a certain set of subarrays. From a given occurrence, the subarray can extend all the way to the right edge, and extend left up to but not including the previous occurrence of the same value.
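The contribution counting in a few lines (each occurrence is counted exactly in the subarrays where it is the leftmost copy of its value):

```python
def distinct_values_sum(a):
    """Sum over all subarrays of the number of distinct elements.
    Occurrence i contributes (i - prev_same) * (n - i): the left end can
    be anywhere in (prev_same, i], the right end anywhere in [i, n)."""
    n = len(a)
    last = {}
    total = 0
    for i, v in enumerate(a):
        prev_same = last.get(v, -1)
        total += (i - prev_same) * (n - i)
        last[v] = i
    return total
```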
Solutions:
Distinct Values Splits
Problem Summary:
Find the # of distinct "splits" of an array, ways to partition the array such that no region has a duplicate
Notes:
For a given left index we can find how far right we can go while all #s are distinct, using a tricky sliding window. Then dp[i] is the sum of a suffix range of DP values.
Solutions:
Longest Palindrome
Problem Summary:
Find the longest palindrome of a string
Notes:
We can use manacher's O(n) or binary search + rolling hash. Supposedly palindromic tree can be used too.
Solutions:
Distinct Subsequences
Problem Summary:
Find the # of distinct subsequences of a string of letters
Notes:
dp[i] is the number of distinct subsequences for s[i:] where we must take i. We recurse to all 26 next letters. Implemented cleanly in C++ bottom up!
Solutions:
Finding Periods
Problem Summary:
Given a string, find all lengths that form a period inside the string
Notes:
Use z-function, for each length, don't check every character at a time, check the z-placement and the next size, using harmonic series. Could also use rolling hash to do the check.
Solutions:
Finding Borders
Problem Summary:
Given a string S, find the lengths of all prefixes that are also a suffix
Notes:
The KMP prefix function (failure function) answers this directly: pi[n-1] is the longest border, and following the chain pi[n-1], pi[pi[n-1] - 1], ... enumerates every border length. The z-function also works: a prefix of length L is a border iff z[n - L] == L. A suffix array would be overkill here.
Solutions:
String Matching
Problem Summary:
Given a pattern and a target string, find how many times it occurs.
Notes:
Can use rolling hash, Aho-Corasick, z-function, or KMP.

Z-function notes:

indices: |0 1 2 3 4 5 6 7| 8 |9 10 11 12 13 14 15 16| 17 18
string:  |a a b x a a b x| c |a a  b  x  a  a  b  x | a  y
z array: |0 1 0 0 4 1 0 0| 0 |8 1  0  0  4?

The z-box was set to 9-16 when we reached i=9. Now we try to solve at i=13: we are inside the z-box, so we can re-use work; the corresponding k is k=4, which has a z score of 4. All we are saying is that the z-box perfectly equals our prefix, and so we can use some of that work. We know the letter after the z-box ending doesn't equal the letter after the prefix (index 17 doesn't equal index 8), but when we re-use work for index 13, that doesn't mean index 17 cannot match index 4.

Also, the reason we do z[i] = min(r - i + 1, z[k]) is that literally all we have is the guarantee that the prefix equals the z-box. We don't know if any letters past it match, even if z[k] is really big, because we don't know whether our z-box's right edge would keep matching the prefix.

Note: if we started our for-i loop at 0, we would initialize the z-box to 0:n-1 and z[0] to n. Then every future i is inside the z-box, and z[i] gets set to 0 initially (set equal to z[k], which is the same entry as z[i], so it ends up 0); now nxt and pref are messed up and we loop fully inside n again. Also, if z[k] ends up being less than r - i + 1, we don't need the later nxt/pref code; we can just continue immediately, in theory, since z[i] could never be greater (see github code).
Solutions:
Word Combinations
Problem Summary:
Given a target string of length <= 5000 and K words with total length <= 1e6, find how many ways we can build the target.
Notes:
Naive dp is dp(index): loop on future j to see if target[index:j+1] is one of the words, and if so add on the next DP value. This can use a rolling hash or a trie (walked as j advances) to check in O(1) whether it is a word. I did bottom-up DP with a trie. I think Aho-Corasick + dp is faster.
Solutions:
Sliding Window Minimum
Problem Summary:
Find the XOR of all window minimums of size K
Notes:
- Can use the monotonic deque strategy
- Can use a two-stack queue
- Can use prefix and suffix mins and aggregate the answer across a min of two (Alex Wice style)
- Sparse table will TLE
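The monotonic deque strategy, sketched here as the XOR-of-window-minimums variant this problem asks for:

```python
from collections import deque

def xor_of_window_minimums(a, k):
    """XOR of the minimum of every window of size k. The deque stores
    indices whose values form a strictly increasing run of candidates."""
    dq = deque()
    result = 0
    for i, v in enumerate(a):
        while dq and a[dq[-1]] >= v:
            dq.pop()            # v dominates older larger-or-equal values
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()        # front candidate fell out of the window
        if i >= k - 1:
            result ^= a[dq[0]]  # front is the window minimum
    return result
```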
Solutions:
Sliding Window Or
Problem Summary:
Compute the bitwise XOR of all window ORs of size K. Sparse table and n*BITS solutions will TLE.
Notes:
I used the two stacks aggregate solution to get O(1) transitions on irreversible actions. We can also build blocks of size K with prefix and suffix ORs, then any window is just the aggregate of a suffix OR and a prefix OR (alex wice solution).
Solutions:
Sliding Window Mode
Problem Summary:
Find the mode for each fixed window range, if multiple pick the smallest number
Notes:
We can track number -> frequency and frequency -> sorted container of numbers with that frequency. This requires us to track a variable for the current max frequency and do extra bookkeeping. Instead, we could track number -> frequency plus a sorted container of tuples (-freq, number); then the add and remove functions become very easy.

There might be a segment tree on values solution here, not sure. GPT said:

Segment tree over values (no set, no "sorted container"). Idea: coordinate-compress the values, maintain freq[idx] for each distinct value, and a segment tree where each node stores:
- bestFreq in that segment
- bestIdx = smallest index achieving bestFreq
Combine rule: pick the higher bestFreq; on a tie, pick the smaller bestIdx. Then add/remove = point update on that value's index, and mode = root.bestIdx (output origValue[bestIdx]). Complexity: O(n log m) where m = #distinct <= n, memory O(m). This is the cleanest "not using sorted containers" exact solution.
Solutions:
Sum of Three Values
Problem Summary:
Given an array, find 3 indices with values summing to X or print impossible
Notes:
Sort then run two pointers for O(1) space
Solutions:
Sliding Window Distinct Values
Problem Summary:
for every window of size k, determine the # of unique elements
Notes:
just use a counter and measure the length of it, we can roll up all the logic into a single for loop easily
Solutions:
Sliding Window Xor
Problem Summary:
Calculate the XOR of all window XORs of a fixed size
Notes:
I simulated with a deque but we should be able to pretty easily compute how many windows an element will be a part of, and then include or not include the xor of that in the result, for O(1) space
Solutions:
Movie Festival Queries
Problem Summary:
We have N movies with start and end times [L, R]. R <= 1e6 (can be bigger, just use coordinate compression). We have queries [arrive, leave] for times we are at the theatre. How many movies can we watch per query?
Notes:
This is jump tables with implementation difficulty. First note we don't need coordinate compression since R is small. For each time, we want to know the next movie we could finish: as small an R as possible, with L >= our time. I computed these with binary search but could do other methods. I used top-down jump tables to help gain clarity. up(power, time) tells us the finish time we are at after watching 2^power next-finishing movies. Base case: time = MAX_R + 1 points to itself (a sentinel state), which lets us chain calls to the jump-table recurrence without if/else branches. Also, power=0 means we go directly to the next finish time precomputed with binary search earlier. For a query, we arrive at `arrive` and jump from large to small powers as long as the landing spot is <= `leave`. Each jump of power p watches exactly 2^p movies; the answer is their sum.
Solutions:
Chessboard and Queens
Problem Summary:
Find the # of ways to place N queens on a board with blocked cells
Notes:
standard major + minor diagonal backtracking
Solutions:
Creating Strings
Problem Summary:
Generate all rearranged strings from an input string
Notes:
can do for loop inside backtrack (loop over letter, freq in a counter and place a letter or skip that type) or no for loop (if we skip a letter type, we must skip all)
Solutions:
Dice Probability
Problem Summary:
We roll a 6-sided die N times, what are the odds the sum is between A and B? N <= 100, B <= 6*N
Notes:
Naive solution is dp(rollNumber, currSum) with a for loop of size 6 inside. We can optimize the for loop to be O(1) with a prefix sum dp. FFT might be usable not sure.
Solutions:
Inverse Inversions
Problem Summary:
We need to make a permutation of size n with k inversions, or show it is impossible
Notes:
Start with the biggest number, we can choose how many inversions include this number. Take as many as possible. Keep repeating until we hit our target. Place the remaining numbers in order. We can always satisfy the requirements.
Solutions:
Sliding Window Sum
Problem Summary:
find the xor of all window sums of size k, large inputs
Notes:
O(k) memory to track last values, can use a deque or a ring buffer, could use O(n) memory for prefix/suffix but probably not intended
Solutions:
Shortest Subsequence
Problem Summary:
Given a DNA string, find the shortest subsequence not present
Notes:
Greedily pick the rightmost letter each time and increment a pointer; I had a clean implementation. Just remember nxt[index][letterType] should not be allowed to pick index itself; we need to only look at indices > index. Imagine we have ACGTACGT: we first pick T, and then T again (technically the order doesn't matter), but we need j > i, not j >= i, or we could jump to the same occurrence instead of a strictly later one.
Solutions:
Course Schedule
Problem Summary:
we have courses that are prereqs of other courses, print an order we can take them, or indicate if impossible
Notes:
I did kahn's where if A must be taken before B we have an edge from A->B and we track in-degrees. We could also track out-degrees and start at nodes with no out-degree on the normal graph (B points to A this time), but we need a reverse edge list to go back up anyway, so the first way is better. We can also use dfs to topo sort.
Solutions:
Restaurant Customers
Problem Summary:
Given restaurant visits with [L, R] how many max customers will be dining at a time? Assume all L and R are distinct. R <= 1e9
Notes:
- sparse seg tree with range add and range max
- normal seg tree with coordinate compression
- sweep line with coordinate compression
- sweep line with event sorting
- sort ranges by left edge, consume each left edge at a time, but pop the smallest right edges from a heap while they don't overlap (or use a sorted container, but won't list; heap feels more apt)
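The event-sorting sweep in a few lines (valid here because all arrival/leave times are distinct, so tie-breaking between events doesn't matter):

```python
def max_concurrent_customers(visits):
    """Sweep line: +1 at each arrival, -1 at each departure; the answer
    is the peak of the running count."""
    events = []
    for arrive, leave in visits:
        events.append((arrive, 1))
        events.append((leave, -1))
    events.sort()
    best = current = 0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best
```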
Solutions:
Movie Festival
Problem Summary:
There are movies with [start, finish] times, how many can we watch?
Notes:
Sort by right edge and track the previous right border. As soon as a movie's left edge is >= the previous right border, greedily take the movie and update the right edge. Could also do cursed solutions like dp(movieIndex) with take/skip and binary search, but skipping these weird solutions in the tags.
Solutions:
Prefix Sum Queries
Problem Summary:
support point updates and max prefix sum range queries
Notes:
standard seg tree where a node stores total sum and max prefix. skipping sqrt decomp as a solution in tags
Solutions:
Maximum Subarray Sum
Problem Summary:
find the max non empty subarray sum
Notes:
Two ways to write it:

curr = 0
res = float('-inf')
for v in A:
    res = max(res, curr + v)
    curr = max(0, curr + v)
print(res)

OR:

res = -inf
currMax = -inf
for v in nums:
    currMax = fmax(currMax + v, v)
    res = fmax(res, currMax)
return res
Solutions:
Stick Lengths
Problem Summary:
we have sticks of different lengths, we can modify a length by X for cost X, find min cost to make everything even.
Notes:
Say Y is the ideal target length. If Y were not the median, we could move it 1 closer to the median (say Y was too big): now more than half the sticks pay 1 less cost, and fewer than half pay 1 more, so any value that is not the median can be improved.
Solutions:
Subarray Divisibility
Problem Summary:
find the # of arrays with a sum divisible by N
Notes:
standard lop off with remainders
Solutions:
Reachable Nodes
Problem Summary:
For each node in a DAG, find how many nodes it can reach
Notes:
We cannot sum up reachable nodes among children because children can reach the same node. Instead we maintain bitsets for what a child can reach, and OR them together. This is easy to do top-down (it shouldn't even be DP, this is just toposort with DFS). I implemented bottom up "dp" with kahn's.
Solutions:
Nearest Shops
Problem Summary:
Given an undirected unweighted graph, some nodes have a shop. Compute the nearest distance to ANOTHER shop for all nodes.
Notes:
For nodes that don't have a shop, it is a simple multi-source BFS. For nodes with a shop, we use Voronoi partitioning: multi-source BFS, but also store an array claimed[node] = the shop whose region claims that node as closest. We can put tuples in the queue (currNode, srcNode) or just read from the array (probably better style). Now scan all edges. Say we have X-Y-Z where X and Z have shops, and the Voronoi partition ends up as X | Y Z. When we scan the X-Y edge we see the claimed value for X differs from Y's, so claimed[X] borders claimed[Y] and is a candidate for the nearest shop. We can use this to compute answers for nodes that contain shops. I did my multi-source BFS with a seen array, a queue, a length = len(q) variable, a minDist array, and a claimed array, but we could use just a minDist array, a queue, and a claimed array if we write updates when we see a neighbor node, not when we pop. I think that is a cleaner way to do BFS that I never did.
Solutions:
Creating Strings II
Problem Summary:
Find how many ways we can reorder a string to get a new string
Notes:
It's n! / (c1! * c2! * ...). Two proofs:

Proof 1: We could start with a single character type: there are V0 occurrences and we have 1 string. Insert the V1 occurrences of the next character: there are now V0 + V1 positions, so (V0 + V1) choose V1 ways. Then (V0 + V1 + V2) choose V2, and so on. These multiply together and simplify to the final n! / (c1! * c2! * ...) formula.

Proof 2: There would normally be n! permutations if the characters were distinct. For a given arrangement, we have to divide by each character's c! independently, since the order among identical characters doesn't matter.
Solutions:
Binomial Coefficients
Problem Summary:
compute many queries of n choose k % MOD
Notes:
Remember these are like binomial coefficients, when we do (x+y)^4 that is: (x+y)^4=(x+y)(x+y)(x+y)(x+y) In this sum of terms, we would get x^3 * y, but what is the coefficient? its 4 choose 3 (we choose x from three of the terms).
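The standard precomputation for answering many n-choose-k queries mod a prime (factorials plus Fermat inverses; names are illustrative):

```python
MOD = 10**9 + 7

def build_factorials(limit):
    """fact[i] = i! mod p; inv_fact[i] = (i!)^{-1} mod p via Fermat's
    little theorem, filled backwards from one modular inverse."""
    fact = [1] * (limit + 1)
    for i in range(1, limit + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * (limit + 1)
    inv_fact[limit] = pow(fact[limit], MOD - 2, MOD)
    for i in range(limit, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD
    return fact, inv_fact

def n_choose_k(n, k, fact, inv_fact):
    if k < 0 or k > n:
        return 0
    return fact[n] * inv_fact[k] % MOD * inv_fact[n - k] % MOD
```

Each query is then O(1) after the O(limit) build.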
Solutions:
Candy Lottery
Problem Summary:
each of N children gets 1 to K candies, compute the expected max
Notes:
Solution 1: compute the exact chance that max = X for every X and take the weighted sum.
- First, compute the chance of the max being at most 1.
- From this, we compute the chance of the maximum being at most 2.
- We can compute the chance the max is exactly 2 by subtracting. Repeat this.

Solution 2: define indicators, I1 = chance max >= 1, I2 = chance max >= 2, etc. When the max is X, we score I1 + I2 + ... + IX, so we need the expected sum of the indicators. Each indicator has a chance of occurring, and we can use linearity of expectation to add up their probabilities; inverse probability computes each one.
Solutions:
Bracket Sequences II
Problem Summary:
Given a prefix of brackets and a final sequence length variable N, find the number of well formed bracket sequences starting with that prefix.
Notes:
We have to place O open brackets and C closed brackets such that the closed surplus never exceeds `surplusOpen`.

O = opens to be placed
C = closeds to be placed
R = remaining total to be placed
H = height (surplus we start at; we cannot touch -1)
C - O = H   # we will always drop down to height 0 at the end, so this holds

The number of ways to place without the exceeding exclusion is C(R, O).

Attempt a completion, say H=3, R=7, O=2, C=5:

) ) ) ) ( ( )
      ^ violating index i=3

At the bad prefix, there are H+1 more closed than open, since we dropped H+1 steps. In the suffix, there is 1 less closed than open (this gives H total more closed than open, which is what we need). If we flip the prefix, there are H+1 more open than closed in the prefix, making a total of H+2 more open than closed.

Computing the # of bad sequences: R spots to place into, and our flip increased the open brackets by H+1, so O' = O + H + 1. We can prove this is bijective in a similar manner to Bracket Sequences I: every input completion maps to a different output (otherwise we could find two inputs that map to the same output, find a differing index, and discover a contradiction), and every output is mapped by an input (we could un-reverse the prefix and get a valid setup). So there are C(R, O') bad sequences.

Final answer: C(R, O) - C(R, O') = C(R, O) - C(R, O + H + 1).

This even behaves for invalid prefixes, see:

( ( (   with N = 4
O = -1, C = 2, R = 1, H = 3
C(1, -1) - C(1, 3) = 0 - 0

Not airtight, but the combinatorics just end up as 0.
Solutions:
Bracket Sequences I
Problem Summary:
Find the # of well formed bracket sequences of length N <= 1e6
Notes:
All sequences of length 2N must have N open and N closed brackets, and there are 2N choose N ways to construct these (place all opens somewhere; the remaining spots are closed). However, we need to exclude sequences with a prefix that has more closed than open brackets, like ( ) ) (. This condition exactly characterizes the good bracket sequences. Why?

Necessary: if a prefix contains more closed than open brackets, it is impossible for that closed bracket to ever be paired with something.

Sufficient: if every prefix has at least as many opens as closeds, maintain a counter of surplus opens. Any time we encounter a closed bracket we have at least 1 open bracket and can decrement the counter, so every closed bracket decrements the counter by 1 and is paired with something safely. We end with a final surplus of 0, implying all open brackets were paired too.

Now we need to count the prefix-violating bracket sequences. For each bad sequence, find the first index where a prefix has more closed than open; this prefix has 1 more closed than open bracket, so the suffix has 1 more open than closed bracket. We invert the prefix: the total sequence now has 2 more open than closed brackets, or more generally N+1 opens and N-1 closeds. For n=4 (the good sequences ()() and (()) are left alone; "|" marks the first violation):

()) | ( -> )(( | (
) | (() -> ( | (()
) | ()( -> ( | ()(
) | )(( -> ( | )((

There are 2N choose N+1 of these resulting sequences, and we claim the inversion is a bijection onto all of them. If a function is injective (every input maps to a unique output) and surjective (every output is mapped to by an input), then it is bijective. Injective: obviously every input has a different output; if two inputs had the same output, we could locate the differing input bracket and show it must produce two differing outputs, violating the claim. Surjective: work backwards. From a 2N choose N+1 sequence (N+1 opening brackets), we can find the first prefix with more opening than closing brackets and re-invert it, producing an input with a prefix of more closing than opening brackets, showing we can produce a state that maps to this sequence. Therefore this is a bijection, and the final answer is C(2N, N) - C(2N, N - 1).
Solutions:
Dynamic Range Sum Queries
Problem Summary:
support point update and range sum
Notes:
not including sqrt decomp
Solutions:
Subarray Sum Queries II
Problem Summary:
compute the max subarray sum of different regions in an array
Notes:
standard seg tree (maxPrefix, maxSuffix, maxSubarraySum, totalSum), not including sqrt decomp, I think mo's could maybe do it also but not including
Solutions:
Subarray Sum Queries
Problem Summary:
compute the max subarray sum of the entire array after point updates
Notes:
standard seg tree (maxPrefix, maxSuffix, maxSubarraySum, totalSum). Since we print only the total subarray sum we can do an O(1) query for the entire tree for the reads. not including sqrt decomp
Solutions:
Tree Coin Collecting I
Problem Summary:
Some nodes have a coin on them. Given Q queries [node1, node2] find the minimum path distance from node1 -> node2 that touches a coin.
Notes:
Find dist on path with binary lifting or sparse table + euler tour, then find min dist on path to another coin (precompute values with multisource bfs). Not listing euler tour / sparse table since a lot of binary lifting problems can be solved with those anyway
Solutions:
Range Xor Queries
Problem Summary:
find range xor queries offline
Notes:
any other method like seg tree, mo's, etc, would be overkill so not listing
Solutions:
Array Division
Notes:
standard koko eats bananas style question
Solutions:
Subarray Sums I
Notes:
can use a variable sliding window or lop off method, used anti-hash salting to defend against python hashmap testcases
Solutions:
Subarray Sums II
Notes:
standard sum lop off problem, used anti-hashmap salting to defend against test cases
Solutions:
Towers
Notes:
binary search is easy, multiset or seg tree + coordinate compression can work but are such overkill I'm not listing as options in the tags
Solutions:
Playlist
Notes:
basic sliding window
Solutions:
Collecting Numbers
Notes:
easy greedy
Solutions:
Missing Coin Sum
Solutions:
Palindrome Reorder
Solutions:
Permutations
Problem Summary:
Construct a permutation from 1...N such that no two adjacent elements have a difference of 1
Notes:
We can put all numbers of one parity next to each other, then the other parity; the only dangerous point is where the two parities touch. For N=4, evens first like 4 2 3 1 would fail (|2 - 3| = 1); we should use odds first, like 3 1 4 2, so the lowest odd number (1) sits at the boundary against the highest even number (4).
Solutions:
Increasing Array
Problem Summary:
can increase a number in an array by cost 1, find min cost to make array mono increasing
Notes:
just greedily increase based on the previous value, can also implement with a prefix max variable
Solutions:
Repetitions
Problem Summary:
Find the longest streak of a letter
Solutions:
Missing Number
Problem Summary:
given all numbers 1 to n except one, find missing
Notes:
use sum subtract or XOR
Solutions:
Weird Algorithm
Notes:
basic simulation
Solutions:
Stick Game
Problem Summary:
We have n <= 1e6 sticks and up to 100 allowed move amounts (each <= 1e6). A move removes a number of sticks equal to one of the amounts, and whoever removes the last stick wins. For every starting stick count <= n, can the player to move win?
Notes:
Naive dp(sticksLeft, turn) can be optimized to dp(sticksLeft) = whether the player to move can win. Can do bottom up too. Don't think we can use a bitset here, so it remains n * k.
Solutions:
Exponentiation
Problem Summary:
Compute a^b % MOD
Notes:
Binary exponentiation, can do bottom up like in the code
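The bottom-up version in a few lines (Python's built-in pow(a, b, m) does the same thing):

```python
def mod_pow(base, exp, mod):
    """Bottom-up binary exponentiation: scan exponent bits low to high,
    squaring the base each step and multiplying it in when the bit is set."""
    base %= mod
    result = 1
    while exp > 0:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result
```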
Solutions:
Permutation Rounds
Problem Summary:
We have a permutation and each step the elements move based on their position in the permutation. What is the minimum # of steps before the array returns to normal?
Notes:
Cycle decomp + LCM of cycle sizes. However, the LCM of large numbers % MOD could explode, and we cannot MOD the LCMs one at a time or we would get the wrong answer. Python passed with just one final MOD at the end of the LCM, so not sure if there is some proof of a low upper bound on the LCM size. Alternatively, the LCM is the product, over primes, of the largest prime power appearing in any cycle size. So track the maximum exponent of 2 across all cycle sizes, likewise for 3, 5, and so on, then multiply these prime powers together with intermediate MODs to prevent large blow-up.
Solutions:
GCD Subsets
Problem Summary:
Find the # of subsets of numbers that have a GCD of x for all x <= N. N <= 1e5, A[i] <= N.
Notes:
First we want to know, for every factor x, how many numbers are divisible by x; we compute this with a harmonic-series loop in n log n. Then, if we want to know how many subsets have every element divisible by x, we can pick or not pick any of those cnt[x] numbers: 2^cnt[x] - 1 (excluding the empty subset). But that is the number of subsets whose GCD is a multiple of x, not exactly x; we need to exclude subsets whose GCD is 2x, 3x, etc. So we do a nested loop, which is a second harmonic series.
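Both harmonic loops sketched together (exact[x] = # of subsets with GCD exactly x, computed from large x down so the multiples are already settled):

```python
def count_subsets_with_gcd(a, n, mod=10**9 + 7):
    """exact[x] = number of non-empty subsets of a whose GCD is exactly x.
    cnt[x] counts elements divisible by x (first harmonic loop); then
    exact[x] = 2^cnt[x] - 1 minus exact[2x], exact[3x], ... (second)."""
    freq = [0] * (n + 1)
    for v in a:
        freq[v] += 1
    cnt = [0] * (n + 1)
    for x in range(1, n + 1):
        for mult in range(x, n + 1, x):
            cnt[x] += freq[mult]
    exact = [0] * (n + 1)
    for x in range(n, 0, -1):
        exact[x] = (pow(2, cnt[x], mod) - 1) % mod
        for mult in range(2 * x, n + 1, x):
            exact[x] = (exact[x] - exact[mult]) % mod
    return exact
```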
Solutions:
Fibonacci Numbers
Problem Summary:
find the nth fibonacci number, N <= 1e18
Notes:
see CSES "throwing dice" explanation for more details on matrix exponentiation
Solutions:
Throwing Dice
Problem Summary:
We can roll a dice 1-6. Find how many ways (distinct orders) to get a sum of N. N <= 1e18.
Notes:
Note that f(x) = f(x-1) + f(x-2) + ... + f(x-6). Base case: the number of ways to roll a sum of 0 is 1, and the preceding cases f(-1), f(-2), etc. all equal 0. So our initial state vector is [1, 0, 0, 0, 0, 0]^T, meaning f(0) (ways to get a sum of 0) is at the top and f(-5) is at the bottom. To progress this by 1 step, we can multiply it by this transition matrix:

mat = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
]

Because this is how matrix * vector multiplication works:

# [newA0]   [ ? ? ? ? ? ? ] [a0]
# [newA1] = [ ? ? ? ? ? ? ] [a1]
# [newA2]   [ ? ? ? ? ? ? ] [a2]
# [newA3]   [ ? ? ? ? ? ? ] [a3]
# [newA4]   [ ? ? ? ? ? ? ] [a4]
# [newA5]   [ ? ? ? ? ? ? ] [a5]

The new f(x) term is a coefficient of 1 on each previous term, added up. We need to progress f(0) n steps to compute f(n), so we multiply our vector by that matrix n times, which we do with matrix exponentiation. Top-down is easy; bottom-up is a bit trickier. We iterate bits small to large: each bit, we square our base matrix, and if that bit is set we multiply our result matrix by it, where the result matrix is initialized to the identity (corresponding to an exponent of 0). We could also do bottom-up in log^2 n by continually finding the largest set bit and multiplying our result by a precomputed M^e, or log n if we use bit operations to find the largest set bit.
Solutions:
Factory Machines
Problem Summary:
koko eats bananas
Solutions:
Nested Ranges Check
Problem Summary:
Given a bunch of [L, R] ranges, find which ranges contain other ranges (L2 >= L, R2 <= R) and find which ranges are inside other ranges.
Notes:
Approach 2, O(n log n), TLE in python:
Sort ranges by (L, -R). To see if our range [L1, R1] contains a range: as we sweep right, we get increasing L2, which is required for containing. But we need to know the smallest R2 on the right (either via a suffix array, or sweeping right to left); that R2 has to be <= R1. We still need bigger ranges on the left: consider [1, 3] and [1, 5]. If sorted by (L, R), we won't see that [1, 5] contains any range to its right. So we want bigger ranges to be able to "see" smaller ranges on the right. These issues occur only when L = L2; I think we could maybe sort by (L, R) and get it working still, but with extra bookkeeping. For the inverse question (are we contained in a range) it is similar logic.
Approach 1:
Sort each range by (L, R). For each interval [L, R], find the i...j range in our sorted array with an L2 >= L and < R. Find the smallest R2 out of elements in that range with a sparse table or seg tree; if R2 < R, we contain a range. Also check the edge case where R2 = R: we can manually store the largest L for each R2. Repeat the process for the second part of the question.
Solutions:
Distinct Values Subsequences
Problem Summary:
count the # of subsequences with all distinct values
Notes:
basic combinatorics
Solutions:
Distinct Values Subarrays
Problem Summary:
Find the total number of subarrays with all distinct values
Notes:
could do some flagrant thing like binary search with a wavelet tree but it is so obscure we won't include those even in the mentions, main way is a variable sliding window
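The variable sliding window can be sketched like this (function name is mine):

```python
def count_distinct_subarrays(a):
    # For each right endpoint r, shrink the left edge l just enough that
    # the window [l, r] has all-distinct values; every subarray ending at
    # r and starting at >= l is then valid, contributing r - l + 1.
    last = {}   # value -> last index seen
    l = 0
    total = 0
    for r, v in enumerate(a):
        if v in last and last[v] >= l:
            l = last[v] + 1
        last[v] = r
        total += r - l + 1
    return total
```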
Solutions:
Knight Moves Grid
Problem Summary:
Reconstruct the minimum number of moves to reach all cells in an NxN chess board
Solutions:
Knight's Tour
Problem Summary:
Given a starting X and Y, find a path for a valid knight tour that visits every square once
Notes:
Backtracking is slow, instead we use a heuristic, prefer to go to the next spot that has the fewest neighbors it can go to, trying to "use up" rare neighbors earlier.
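A minimal sketch of the fewest-onward-moves heuristic (this is Warnsdorff's rule; tie-breaking here is arbitrary, so it can occasionally get stuck, and all names are mine):

```python
def knight_tour(n, sx, sy):
    # Warnsdorff's heuristic: always jump to the reachable square that
    # itself has the fewest onward moves, using up "rare" squares early.
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    board = [[0] * n for _ in range(n)]  # 0 = unvisited, else step number

    def neighbors(x, y):
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and board[nx][ny] == 0:
                yield nx, ny

    x, y = sx, sy
    board[x][y] = 1
    for step in range(2, n * n + 1):
        # candidate with the fewest onward options wins
        nxt = min(neighbors(x, y),
                  key=lambda p: sum(1 for _ in neighbors(*p)),
                  default=None)
        if nxt is None:
            return None  # heuristic got stuck (no backtracking in this sketch)
        x, y = nxt
        board[x][y] = step
    return board
```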
Solutions:
Hamiltonian Flights
Problem Summary:
Given a graph of N <= 20 nodes, find how many paths can go from 1->N touching each node once.
Notes:
This is a hamiltonian path. We can use bitmask DP to track visited nodes. I tried push bottom up dp: for oldVisitedMask, for oldJustArrivedAtNode, for outgoing edges in that node. To AC in time, I had to prune, for instance skipping oldVisitedMasks with the Nth node already visited (can probably just iterate on half as many masks since that is the maximum bit). Also did some other pruning in the hot paths like not reaching the Nth node early. 2^n * n^2 time.
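A 0-indexed sketch of the push bottom-up bitmask DP with the "don't visit the last node early" prune (function name and edge-list format are mine):

```python
def count_hamiltonian_paths(n, edges, mod=10**9 + 7):
    # dp[mask][v] = number of routes starting at node 0, having visited
    # exactly the nodes in `mask`, currently standing at v (push-DP).
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    full = 1 << n
    dp = [[0] * n for _ in range(full)]
    dp[1][0] = 1
    for mask in range(full):
        # prune: skip states that reached node n-1 before visiting everyone
        if mask != full - 1 and (mask >> (n - 1)) & 1:
            continue
        for u in range(n):
            ways = dp[mask][u]
            if not ways:
                continue
            for v in adj[u]:
                if not (mask >> v) & 1:
                    nmask = mask | (1 << v)
                    dp[nmask][v] = (dp[nmask][v] + ways) % mod
    return dp[full - 1][n - 1]
```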
Solutions:
Sereja and GCD 2
Problem Summary:
Find the # of ways to make a sequence of N positive integers <= M such that no pair of elements has a GCD > 1. N <= 1e5, M <= 100.
Notes:
A naive solution is to have dp(primeFactorsUsedBitmask, countOfNumbersPlaced, numberWeAreConsideringPlacing) which is 2^25 * 100 * 100. We can observe that at most we could place ~25 numbers (all primes <= 100) so it becomes 2^25 * 25 * 100. First trick: Any prime >= M/2 could only appear once, so we don't need to track them in the bitmask. Instead they can be handled separately. This makes it 2^15 * 25 * 100. Then realize that instead of considering every number separately, we can compress them. If we consider a number with prime factors 2 and 5, we cannot consider any others with those 2 exact primes. So group numbers by their prime bitmasks, and iterate on the length of that list instead of all 100 numbers. Maybe roughly 2^15 * 25 * 50 (guessing). All of the big primes (> m/2) could be handled in a for loop outside the DP, so "we try taking 3 big primes" and then we kick off a dp call where countOfNumbersPlaced = 3. However if we don't do the big primes in a for loop outside, we can just handle that in our base case. When we run out of the mask buckets we consider placing (like the {2,5} bucket) we can see how many spots we have and choose to place big primes there. We could cache this so our base case isn't a for loop though that seemed to have slowed down the time complexity. But this drops states to 2^15 * 15 * ~50. Now we can cleanse masks. Once we take a prime bitmask option, and those primes aren't present anywhere else in the future masks, we should remove them, to collide DP states. So if we took a number that set the prime number 7 to true, but no future mask buckets contain a 7, we don't care to set that bit anymore. This led to a 0.03s solution (100x speedup). I think this logic could be used to handle the big primes too. So we let the big primes be their independent groups of size 1 in our mask buckets but they would get immediately cleansed. Might be equally fast.
Solutions:
Sherlock and Coprime Subset
Problem Summary:
Given an array, N <= 50, A_i <= 50. What is the largest subset s.t. every element is co-prime.
Notes:
Naive is knapsack DP with a size 15 bitmask for primes <= 50. We can optimize though. I don't think we need to consider every element. For instance 8 and 2 don't need to both be considered. Group numbers by their prime factorization and consider only one from each group. Also primes >=25 can only appear once and therefore can always be taken once, so we can cap our prime bitmask to 2 3 5 7 11 13 17 19 23. Similar to "Sereja and GCD 2". We could probably also re-order elements and then cleanse bitmasks (if no future element has 2 as a prime number, we can keep that bit unset always), we did that optimization in Sereja and GCD 2.
Solutions:
Little Elephant and T-Shirts
Problem Summary:
There are up to 10 people. There are 100 unique types of shirts. The people can own different combinations of shirts in their closet. How many ways can the people wear shirts such that no two people wear the same shirt?
Notes:
Keep a mask on the people and loop over shirts, simple
Solutions:
U - Grouping
Problem Summary:
We have n <= 16 nodes and graph[node1][node2] score if they are in the same group. Divide the nodes into groups with max score.
Notes:
Submask enumeration can give us a current bitmask state of taken nodes, and pick a new group. 3^n. We also can speed up by 2x by forcing a submask to contain our smallest bit that still needs to get grouped. We can get the same partitions (that bit must be in SOME group) and skip half the states. So 3^n / 2. Think of the half states like each node being a yes/no if it is in the new group, then we fix one node to always be yes. Also use dp to cache the score of each submask which can be n * 2^n (or maybe even faster).
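A sketch of the 3^n partition DP with both speedups described above: the incrementally-built per-group score cache, and forcing each new group to contain the lowest unplaced bit (function name is mine):

```python
def best_grouping(n, score):
    # score[i][j] = pairwise score added if nodes i and j share a group
    full = 1 << n
    # group_score[mask] = sum of score over pairs inside mask, built
    # incrementally: peel off the lowest bit i and add its cross terms
    group_score = [0] * full
    for mask in range(1, full):
        i = (mask & -mask).bit_length() - 1
        rest = mask & (mask - 1)
        group_score[mask] = group_score[rest] + sum(
            score[i][j] for j in range(n) if (rest >> j) & 1)
    dp = [0] * full  # dp[mask] = best score grouping exactly `mask`
    for mask in range(1, full):
        low = mask & -mask  # lowest unplaced node must join the new group
        best = None
        sub = mask
        while sub:  # enumerate submasks of mask
            if sub & low:
                cand = group_score[sub] + dp[mask ^ sub]
                if best is None or cand > best:
                    best = cand
            sub = (sub - 1) & mask
        dp[mask] = best
    return dp[full - 1]
```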
Solutions:
D. Traveling Graph
Problem Summary:
Find the shortest cycle (starts and end at same node) in a graph that touches every edge. N <= 15 M <= 2000.
Notes:
Note that a cycle which cleanly visits each edge once and starts and ends at the same node is an Euler tour. A graph is Euler tourable if all nodes have even degrees. If nodes have odd degrees, we would need to walk between those nodes once to flip their parities. There can only be an even amount of odd nodes (just think about that, basic logic). So we collect the odd nodes and pair them together via the shortest path between them, as an extra walk, to make all nodes "even". We can get the shortest distance between any 2 nodes with a variety of methods, like Floyd-Warshall. Any path starting at X and ending at Y flips only the parities of the endpoints. Use DP bitmask on the odd nodes, n * 2^n complexity (the least set bit must get paired with something, like the "Looking for Order" problem on codeforces). Watch for weird traps like a graph of just nodes with no edges.
Solutions:
D. Tree Jumps
Notes:
A naive DP is to loop through all nodes except children for a given node, which should explode in complexity. Instead, we cache the result for the entire layer in a separate DP, and then call layer - all children for a given node, which amortizes.
Solutions:
E. New Year's Gifts
Notes:
One way to solve it: First pay all the y coins to everyone. Sort friends by lowest box beauty first, and our gifts. For each gift, find the rightmost friend we could make happy with that gift. This defines a contiguous 0...R range we can make happy with the gift. Find the highest coin amount in there, and use the gift for that friend. To find the highest coin amount there and update it, use a segment tree. I think we could also do a similar thing but we maintain a multiset of friends that we could make happy with our gift. We pop the largest coin value from it.
Solutions:
840. Magic Squares In Grid
Notes:
manual brute force, light pruning options
Solutions:
G - Longest Path
Problem Summary:
Find the longest path in a DAG
Notes:
Could also do this: DFS postorder topo sort. Run DFS, and when you finish exploring a node, push it into a list. At the end, reverse that list = topo order. Why it works: in a DAG, when you finish u, all nodes reachable from u have been finished, so u should come before them in topo order (after reversing).
Solutions:
Game Routes
Notes:
looping on edges amortizes
Solutions:
D. Even String
Problem Summary:
We are given n letters and need to partition them into groups of size n//2 and n-(n//2) with order mattering. How many ways can we do this?
Notes:
We can partition the groups with 0/1 knapsack as there are only 26 pieces. But once we partition them, say [a, a, b, b, b, b, f] and [c, c, c, e, e, e, g], how do we know how many ways, with order mattering, can be formed? It would seem to depend on the specific partition chosen. Like the left side here would be 7! / (2! * 4! * 1!) options. We can find this out for one partition but not all possible partitions. Well, observe that for a given partition, the ways to arrange the left is (a+b+f)! / (a! * b! * f!) [any element not in the right group]. The right is (c+e+g)! / (c! * e! * g!) [any element not in the left group]. Then the total ways is the product of those two. It doesn't matter what the chosen elements were; we always get odd! * even! / (each letter!). See editorial: https://codeforces.com/blog/entry/141425
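A sketch combining the knapsack split count with the constant multinomial factor (letter counts as a plain list; function name and mod are my assumptions):

```python
from math import factorial

def even_string_ways(counts, mod=998244353):
    # counts[c] = multiplicity of letter c; each letter's whole block goes
    # to one side. The orderings for ANY valid split is the same constant
    # (n//2)! * (n - n//2)! / prod(c!), so multiply that constant by the
    # number of knapsack subsets of letters hitting size n//2.
    n = sum(counts)
    half = n // 2
    dp = [0] * (half + 1)  # dp[s] = number of letter-subsets summing to s
    dp[0] = 1
    for c in counts:
        for s in range(half, c - 1, -1):
            dp[s] = (dp[s] + dp[s - c]) % mod
    const = factorial(half) * factorial(n - half)
    for c in counts:
        const //= factorial(c)  # exact whenever at least one split exists
    return dp[half] * const % mod
```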
Solutions:
B. New Year Cake
Notes:
could simulate, or binary search on the most number of rows for each color type; I think there is even an O(1) math solution
Solutions:
C. Production of Snowmen
Notes:
Maybe there is a faster way, I did n^2, can use a queue or two pointers. Pick up to n positions for the first row. Pick up to n positions for the second row depending what works. Last row gets forced placements.
Solutions:
D. Christmas Tree Decoration
Solutions:
D. Fibonacci Paths
Notes:
A naive dp is dp(edge) and we recurse to all neighbor nodes with the right target size. Even if we determine ahead of time "only visit these neighbors with this size" it doesn't amortize, imagine n/2 edges in a spoke graph pointing to a center with outgoing edges from that center all going to a future valid node. We would get n/2 starting and n/2 ending nodes which is n^2. Instead, dp(node, needVal) is better. It can take on only E states (1 per edge) but it compresses things better. Because when we are at the center spoke node in the graph we will cache a result and not revisit things too many times.
Solutions:
C. Needle in a Haystack
Notes:
Take the extra letters from the bigger string and sort them. Those can go wherever. Now run two pointers and greedily take the smallest letter from that and from our target string. If a tie, we should always take from target string in case there is a smaller letter after (the sorted section can only go up lexicographically)
Solutions:
B. Optimal Shifts
Notes:
When we shift we spend 1, so shift until all are covered. Just find the longest gap of 0s; we need to cross all of those.
Solutions:
A. Operations with Inversions
Notes:
A number can always be removed if there is a bigger number on the left. Don't remove a number until we have removed all smaller numbers. So just use prefix.
Solutions:
B. XOR Array
Notes:
I randomly generated numbers but if we ever would match a previous prefix XOR it means we cannot use this number. Even generating numbers up to 10^9 there is an >99% chance of a prefix collision, I think. So I had to loop and regenerate. Also use a 1e8 bound not 1e9 otherwise when we are forced to place the single number at R that creates an XOR for L...R of 0, that forced number might be too big.
Solutions:
A. Little Fairy's Painting
Notes:
basic greedy
Solutions:
B. Wishing Cards
Notes:
Hard question; read `prefix DP.py` and the actual notes in the github problem URL for learnings. There is apparently another k^3 DP (on values instead of the people?) that is n + k^3. It can be solved with CHT and a Li Chao tree too: https://codeforces.com/blog/entry/149003?#comment-1330854
Solutions:
C. First or Second
Notes:
One child will be left in line, fix this child. All children after them must be subtracted, all before must be removed. Subtracted suffixes are easy. The fully evicted prefix we can do: -Forced to take very first child in general -If a child is negative, we should take them when they are in the second position, if they are positive, move it to the front position first.
Solutions:
B. Impost or Sus
Notes:
A 'u' cannot be adjacent to another 'u', or we will have differing lengths for some 'u' in that chain. Also a 'u' cannot be on the edge. Scan left to right, make the edges 's'. If we encounter a 'u', if the previous letter was a 'u', we must change it. There is some other greedy logic I intuited but didn't make it airtight.
Solutions:
A. Yes or Yes
Notes:
Invariant: A merge cannot decrease the number of Ys. If we have more than 1, we fail. If we have <= 1, we can merge any two pairs.
Solutions:
RAONE - Ra-One Numbers
Notes:
This problem requires getting the difference of sums between even and odd indices, indexed from the right. We can easily do this without tracking an extra sequenceLength or anything. If we are at index i, and the upper bound is length n, we can just infer the position:
# 0 1 2 3 4
#     ^ placing here must be odd index (from the right, 1 indexed)
If we needed the position from the left, I think we can just track it with a hasStarted boolean (which itself does not need to be part of the cache; I think it can get optimized out from the memory allocation like the tight flags can be) and a parity flag; as soon as we place the first digit that starts the sequence we keep the parity flag updated. Needed C++ to pass the 0.1s limit.
Solutions:
CPCRC1C - Sum of Digits
Problem Summary:
Find the sum of all digits for all numbers between L and R.
Notes:
When we add a digit, to know how much it contributes we also need to know how many numbers can be formed after adding this digit. So our digit DP is dp(i, tight) and returns (totalSum, numberOfDoableNumbers).
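The (totalSum, count) digit DP can be sketched like this, including the dp(R) - dp(L-1) wrapper (function names are mine):

```python
from functools import lru_cache

def digit_sum_upto(n):
    # Sum of digit-sums of all numbers in [0, n], via dp(i, tight)
    # returning (totalSum, numberOfDoableNumbers).
    if n < 0:
        return 0
    digits = list(map(int, str(n)))

    @lru_cache(maxsize=None)
    def dp(i, tight):
        if i == len(digits):
            return (0, 1)  # one completed number, contributing nothing new
        total, count = 0, 0
        hi = digits[i] if tight else 9
        for d in range(hi + 1):
            s, c = dp(i + 1, tight and d == hi)
            # digit d contributes once per number completed after placing it
            total += s + d * c
            count += c
        return (total, count)

    return dp(0, True)[0]

def digit_sum_range(l, r):
    return digit_sum_upto(r) - digit_sum_upto(l - 1)
```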
Solutions:
F. Jee, You See?
Problem Summary:
We are given n, L, R, Z. n <= 1000. L <= R <= 1e18. Z <= 1e18. Find how many sequences containing non-negative integers, of length n, exist, with L <= sum <= R and the bitwise XOR is Z.
Notes:
An easier subtask is to ignore the XOR. First find how many ways to construct N numbers with sum <= R. Then we will do F(R) - F(L-1). Going number by number is hard because there are so many options for each of the N spots. We will go bit by bit. When we consider the largest bit (say 32) we can only set that bit in so many spots, bounded by R. We can also figure out n choose k ways to place those bits, for a given amount of placements we choose to make. We will try placing 0 bits, 1 bit, 2 bits, all the way up to the max of this bit type we are allowed to place. Using combinatorics to help compute # of ways. But how do we know how many bits we can place? Say R = 16. The moment we place a single 2^4 bit we need to remember this when we try placing 2^3 bits. We cannot keep a `budget` parameter in our DP because it is too many states. Instead, our `budget` parameter will track how many of the next bit we can place coming from the previous states. Initially this is 0 since there are no previous states. We are going to start at a really big bit. If R has that bit set our budget goes up by 1. Say we can place up to 5 bits, try all combos from 0-5 bits. If we place 3, our budget left is 2. But when we drop a bit, the budget left will double because those bits are half as big (new budget 4). Note that at any point we only ever need 2*N budget. Why? Say we place every single bit set in our current layer, spending N budget. All of those bits are bigger than everything else remaining, like [1000, 1000, 1000, 1000] is bigger than [0111, 0111, 0111, 0111] so we don't need to worry about a budget bigger than 2*N. This feels a bit not airtight but just think about it some more. We start with a budget above of 0. As soon as we hit the biggest bit in R we can only place 1 of those numbers. If we don't, we can only place 2 of the next number and no smaller numbers. 
Any remaining smaller bits we want to place will get picked up by when we increment the budget by 1 due to a bit being set in R. Now to handle the XOR, if that bit is set in Z we can only place odd amounts of bits, otherwise even.
Solutions:
E - Digit Sum Divisible
Problem Summary:
Find how many numbers <= 1e14 are divisible by the sum of their digits.
Notes:
A reasonable upper bound for the maximum digit sum is 9 * 15, put a 9 in every spot. But when we run the digit DP, both the remainder and digit sum are changing, so it is hard to track both. Instead, loop over each possible digit sum separately. Then do dp(i, remainder, currentSum). We can do this trick to increment remainder as we slide (or just precompute d * pow10): (remSoFar * 10 + digit) % s
Solutions:
C. Classy Numbers
Notes:
I avoided the 2 x 2 extra dimensions by not caching highTight and lowTight. Guessing combinatorics can be used.
Solutions:
G. Maximum Product
Problem Summary:
Find the number with the greatest digit product that is a <= number <= b <= 1e18
Notes:
For digit dp we need dp(i, tightHigh, tightLow, isStarted) to get the maximum digit product. We need both low and high tight because we cannot diff results like dp(b) - dp(a - 1), since we are looking for a maximum product, not a count of numbers in a range. So if the maximum product we can form with numbers <= b is 240, we cannot say that it is formed with numbers >= a. Our options for the number are: -neither tight -one is not tight, the other is tight -both are tight (shared prefix so far) In the cases where either is tight, we have deterministically arrived at this state from one exact sequence of nodes; we could not arrive here from more than one place, so no caching is needed since we can only enter this node once. I made the cache only i * isStarted size (meaning n * 2). We didn't need tightHigh or tightLow in the cache because tight states are entered deterministically. Imagine DP charts are like a DAG: every node can recurse to multiple next nodes, and nodes can have multiple incoming edges from previous states. However, whenever we are tight, it implies we came from one exact predecessor, so there is no point caching it. See the code. The cache stores the maximum product that can be gotten from those DP params. I still allocated an n * 2 * 2 * 2 nxtDigit array which tells us the optimal next digit to pick in the reconstruction, but this actually isn't needed. We can just execute our dp, then trace back through every digit, determine the best one live, and transition to the next state. We could also allocate an n * 2 * 2 * 2 nxtState array (I did not), but again it isn't needed since we can determine it on the fly, like I did. Also, if we want to track the nxtDigit array it cannot just be nxtDigit[i] or nxtDigit[i][started]; it depends exactly on the specific node we are at in the DP DAG. Even if we aren't caching things due to something being tight, we are at a unique node and need to know the selected digit to reach the next node.
Note that I don't think we can directly construct the answer inside the DP function, we need to first call the DP function and then construct after. I need to think about this more. Not sure if doing bottom up changes this too. There is also a greedy to this solution I believe hence an easier rating.
Solutions:
12747 - Back to Edit Distance
Problem Summary:
Find the LCS of 2 permutations from 1...n. N <= 1e5.
Notes:
Imagine we transform A -> B with some operations. There were some elements (possibly none) in A that we never deleted; they appear as a subsequence in B too. Everything else got deleted and therefore re-inserted. So the optimal answer requires finding the LCS, and then the remaining elements each use 2 operations. How to find the LCS?
[1, 3, 5, 4, 2]
[1, 5, 4, 3, 2]
Remap the first array to the indices those values occur at in the second array: [0, 3, 1, 2, 4]. Now an increasing subsequence in the indices implies we saw those values from left to right in array B; we also saw them from left to right in array A because we read it left to right. This is a common subsequence then, as it appears left to right in both A and B. To find the longest common one, we just find the LIS on the indices array.
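The remap + LIS step sketched with patience sorting (function name is mine):

```python
from bisect import bisect_left

def lcs_of_permutations(a, b):
    # Remap a's values to their positions in b; LCS(a, b) then equals the
    # LIS of that index array, found in O(n log n) with patience sorting.
    pos = {v: i for i, v in enumerate(b)}
    idx = [pos[v] for v in a]
    tails = []  # tails[k] = smallest possible tail of an IS of length k+1
    for x in idx:
        j = bisect_left(tails, x)
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(tails)
```

The edit distance is then 2 * (n - LCS), per the reasoning above.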
Solutions:
F. PolandBall and Gifts
Notes:
We decompose the groups into cycles. If k balls forget their gifts, and we can form a subset sum of exactly k, that is the minimum. Otherwise, we take a bunch of full group cycles, and one non full group. So if our cycles are size [2, 5, 7, 30] and k= 20 we can fully lose the 2+5+7 balls and 6 balls in the 30 group also forget. Any time X balls in a group forget, and we want to minimize lost gifts, place them adjacent, and we lose 1 extra gift. So the minimum is either k or k+1 depending on if the subset sum k is feasible. Some optimizations (not sure how many will be needed): -rootN unique sizes observation to get fewer components to knapsack on -Bundle splitting technique, but I don't think the time complexity changes when S is <= 1e5 anyway, worst case the cycles are size 1, 2, 3, ..., root N where bundle splitting does not reduce anything. -Bitset for feasibility They might all be needed due to tight time limit. Used union find to find component sizes. For the max possible gifts lost, even sized components can lose their full gifts with just half the members forgetting. Odd sized components we can basically use a greedy. Monotonic deque could maybe work too but I think it is no different complexity from bundle splitting since S is capped.
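The "is subset sum exactly k feasible" check, sketched with a Python big int as the bitset (plain 0/1 over the cycle sizes, ignoring the rootN / bundle-splitting optimizations; function name is mine):

```python
def subset_sum_feasible(sizes, k):
    # Bit s of `reach` is set iff some subset of `sizes` sums to s.
    # The shift-or is the entire knapsack transition.
    reach = 1  # only the empty sum 0 initially
    for s in sizes:
        reach |= reach << s
    return (reach >> k) & 1 == 1
```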
Solutions:
Tree Upshot
Notes:
WLOG, the root node can be either color, adding 1 to the cost. Each child node should be an inverse color by force, because their costs would be 0, and also provide freedom for either color for the grandchildren. This means even depths (0-indexed) we get to pick colors and odd depths are forced to flip. For each node at an even depth we can pick its color, and all its children get flipped. This means our subset sum sizes are (# children - 1) for each even depth node. We need to find the largest subset sum we can form with these components (just the sum) and also the sum closest to 0. Instead of the one closest to 0 I found the one closest to half the total sum with a bitset. There are at most rootN unique component sizes and we are doing feasibility checks so we can use both bundle splitting and bitsets but not sure how necessary either is.
Solutions:
E. Lucky Country
Notes:
Observe there are at most rootN unique component sizes. This means we can knapsack on root N items + how many times they occur. Then we binary bundle split on the # of island sizes we have for each island type, making the number of bundles basically log(something) * rootN. But actually I think the log factor doesn't exist. It should get amortized globally. Say we have worst case rootN unique sizes then each unique size would only have like 2 bundles. Not fully formalized but seems right. Also note there are at most 2 root N unique sizes if all numbers are distinct and sum to N. Just imagine filling up a right triangle, first a column of height 1, on the right a column of height 2, and so on up to root N. That should fill half of an N x N square. I don't think the bundles is necessary also, see my "Subset sums DP" guide on github, I think the bundling doesn't do anything. Because if we have rootN unique sizes each occurring once, the # of components does not change with bundling, but I couldn't get AC without bundling (even on C++). For each of those ~1000(?) bundles we can take or not take it like 0-1 knapsack style. We need to know the min number of islands to be merged for all island sizes from 1 to N. So we maintain dp[islandSize] = min # of merges needed. So n * rootN. Initialize base case only to dp[0] = 0, not dp[somePremadeIslandSizeWeHave] = 0. Because the dp array initially represents answers before any items are processed. Otherwise we would accidentally double count things, like if we initialized dp[2] = 0 then we considered our first island of 2 nodes we might form dp[4] but that's invalid, we used the same island of 2 nodes twice. I believe we could use some sort of monotonic deque DP here too but I think it might not improve complexity since S is bounded by 1e5 and so bundle splitting actually doesn't improve complexity.
Solutions:
B. Little Pony and Harmony Chest
Notes:
A lot of lessons in this. The number theory portion: two numbers share a factor iff they share a prime factor, so we decompose into primes and track taken prime bitmasks (~17 primes <= 60). Top down unoptimized python TLEd. Bottom up also TLEd but is solvable (see yawn-seans: https://codeforces.com/contest/453/submission/222623781). His optimizations precomputed valid choices per-mask so it wasn't repeated inside the hot path like I had. He also didn't need a prevChoice array in the dp reconstruction; since all DP transitions are a +- transition he just subtracted a mask to get to the previous state. He also did some pruning thing that required sorting. Top down DP is pull: it feels like we are picking the "nextState" choices but it is actually a prevState since it is recursive. Note that this reconstruction required choices with an extra `n` parameter, as we have to iterate back on the `n` axis:
prevMask = [[-1] * (fmask + 1) for _ in range(n)]
prevChoice = [[-1] * (fmask + 1) for _ in range(n)]
Yawn sean also had a second entirely different solution that was faster, seems to be some number theory coprime thing: https://codeforces.com/contest/453/submission/222618899. That is harder than the `3` rating I gave for number theory (that was the rating I gave for the main DP solution).
Solutions:
A. Gym Plates
Notes:
A few lessons about N-ary masks here. 1/ Why is it that all binary masks uniquely map to base-10 integers? Because 2^x is greater than the sum of all 2^y where y < x. If two binary masks somehow had the same base-10 integer: there is at least one bit they differ by. Find the highest such bit. The other mask cannot make up that 2^x sum with all of its remaining bits. 2/ We have a mask which is a number in base 10, but we are treating it as some encoded base-3 data. So 100 (base 10) might "mean" 10201 in our base-3 info. 3/ To extract the value of one of those encoded base-3 digits, like the 2, we can floor divide our base-10 number by 3^i and take the remainder mod 3. Remember a floor division by 3 on 100 in base 10 is doing a right digit shift on the SAME number in base 3.
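Tiny sketch of the digit extraction (function names are mine), matching the 100 <-> 10201 example:

```python
def base3_digit(mask, i):
    # Treat the base-10 int `mask` as encoded base-3 data: floor-dividing
    # by 3**i shifts right by i base-3 digits, % 3 reads the lowest one.
    return (mask // 3**i) % 3

def encode_base3(digits):
    # digits[0] is the least significant base-3 digit
    return sum(d * 3**i for i, d in enumerate(digits))
```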
Solutions:
C. Looking for Order
Notes:
A naive solution is have a placedItemsMask and enumerate n^2 combos to try placing them for O(n^2 * 2^n). Instead we can consider a mask and know the least set bit can either go alone or be paired with one of n other options. This is O(n * 2^n). We can do top-down pull dp by looking at a mask and saying that mask must have come from some submask where the least set bit was just set. We can also do pull dp bottom up. I did push dp bottom up though which feels more intuitive. For a given mask we can push a possible route to the future state. For dp reconstruction we cannot update nextDpState[state] -> nextState every time we improve dp[nextState] from our current state, that is the wrong order. That means the best incoming edge into nextState is this edge, NOT the best outgoing edge from our current state. Instead we store a prevState and also prevBagsPlacedChoice (optional, but may be easier to help with reconstruction to avoid computing it twice), then reconstruct backwards. To get python to pass in time I had to skip masks I was enumerating on if dp[mask] = INF (no point pushing here). This can occur because not every mask is reachable when following the "only set least unset bit" rule.
Solutions:
E1. Prime Gaming (Easy Version)
Notes:
We cannot represent the current numbers taken so far with a bitmask, because we don't know how to capture the initial configuration of the board. We want to enumerate each board and also solve those boards quickly (whether Alice can force a 2 at the end). Our bitmask will capture both the board-state and implicitly the initial configuration. We excise a bit from a spot when we remove it. Track the number of piles in the dp too, since just the mask is not enough info; like 001 and 01 are different masks in this case. We also track whose turn it is, which is a derived value. We can therefore remove that parameter, but we cannot remove the concept of turns like we can for other game theory problems. GPT says: You can "do away with turns" only in games where the move rule doesn't depend on whose turn it is, i.e. both players have the same move set and the only difference is objective (maximize vs minimize). Then you can write a single recurrence like:
V(s) = max over a in Moves(s) of V(next(s, a))
if the game is actually "current player always maximizes their own payoff" and the payoff is from the current player's perspective, so it flips each turn automatically. But this game's payoff is not "whoever moves last wins" or "current player's payoff". It's a fixed payoff: Outcome = value of the final pile x. Alice always wants max x, Bob always wants min x. That payoff is anchored to Alice's preference, not the "current player". So if you write "current player maximizes", Bob would be maximizing too, which is the wrong game. When can you eliminate the turn variable? Two common cases:
1) Zero-sum with player-relative payoff, e.g. winner/loser games where the value is +1 for a current-player win, -1 for a loss. Then you can store V(s) = advantage for the player to move. The recurrence becomes V(s) = max over a of ( -V(next(s, a)) ). No explicit "whose turn" is needed because the sign flip encodes it.
2) Turn is derivable from the state, like here: turn parity is determined by numPiles (or moves made). Then you don't store the turn, you compute it from parity (which we already did). But the concept of alternating max/min still exists.
-------------------------
Example 1: Take-away game with "current player wins" payoff (sign-flip trick). Game: there are N stones. On your turn, remove 1 or 2 stones. If you take the last stone, you win. Define V(n) = +1 if the player to move from state n will win with perfect play, else -1. Then you don't need a turn variable because "player to move" is built into the definition. Recurrence: if you remove 1 stone, the opponent faces n-1, whose value is V(n-1) for them; for you, that outcome is -V(n-1). If you remove 2 stones, the opponent faces n-2, whose value is V(n-2) for them; for you, that outcome is -V(n-2). So:
V(0) = -1 (no moves: player to move loses)
V(n) = max( -V(n-1), -V(n-2) ) for n >= 1
No explicit turn variable is needed because the sign flip automatically encodes whose turn it is.
Example 2: Turn is derivable from the state (parity). Game: players alternate turns, but the number of moves made is determined by the state itself. In our pile-removal game: start with n piles, each move removes exactly one pile. At a state with numPiles remaining, exactly (n - numPiles) moves have occurred so far. Therefore: if (n - numPiles) is even, it is Alice's turn; if odd, it is Bob's turn. So you do not store a "turn" variable in the DP state. You only store (numPiles, mask) and decide whether to take max or min based on parity. The alternating max/min logic still exists, but the explicit turn variable is eliminated.
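The take-away sign-flip example as runnable code (function name is mine):

```python
def take_away_value(n):
    # V(n) = +1 if the player to move wins the take-1-or-2-stones game
    # from n stones with perfect play, else -1. The sign flip -V(...)
    # encodes the turn change, so no explicit turn variable is stored.
    V = [-1] * (n + 1)  # V[0] = -1: no moves, the player to move loses
    for i in range(1, n + 1):
        V[i] = max(-V[i - j] for j in (1, 2) if i - j >= 0)
    return V[n]
```

As expected for this game, the losing positions for the player to move are the multiples of 3.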
Solutions:
C. Square Subsets
Notes:
A perfect square is one whose prime factors all have even exponents: multiplying a number by itself doubles every exponent, like 2^x * 2^x = 2^(2x), and 2x is always even. We maintain a bitmask of the primes with odd exponent in the current product (only ~19 primes up to a[i] <= 70). We enumerate each unique value in the array and count how many ways to pick an odd number of copies versus an even number (basic math trick: both counts are 2^(count-1), so they're the same amount). I used push-dp on bottom up but also wrote a top down that MLEs.
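A sketch of the odd-exponent bitmask idea (the prime list and function name are mine): a product of chosen numbers is a perfect square iff the XOR of their masks is 0.

```python
# All 19 primes up to 70; bit i of the mask corresponds to PRIMES[i].
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67]

def odd_exponent_mask(x):
    # bit i is set iff PRIMES[i] divides x an odd number of times
    mask = 0
    for i, p in enumerate(PRIMES):
        while x % p == 0:
            x //= p
            mask ^= 1 << i  # each division toggles the parity bit
    return mask
```

Multiplying numbers corresponds to XOR-ing their masks, which is why the dp state can be just the current mask.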
Solutions:
E. Fish
Notes:
The naive dp is dp(currentDeadMask), storing an array of size n: the probability each fish ends up as the last one alive from that state. We enumerate n^2 pairs inside, for a total of n^3 * 2^n, which is too slow. Instead make dp[currentDeadMask] the chance we ever reach this state. We can use push-dp (most natural): for a given mask, choose 2 living fish, make them battle, then push into the next state.
Solutions:
B. Modulo Sum
Notes:
O(n + m^2) solution: if n >= m, some two of the n+1 prefix sums must share a modulo by pigeonhole. Find two prefixes l < r with the same modulo; then [l+1...r] has sum % m == 0. If n < m, we can solve it in O(n * m) with a simple knapsack DP. O(n * m / W) solution: I solved it with a bitset, which is n * m / W complexity. Track doable remainders of subsequences with a bitset; use a clearance mask to keep the bitset small.
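A sketch of the bitset-of-remainders idea using a Python int as the bitset (names are mine); `full` plays the role of the clearance mask, keeping the integer at m bits:

```python
def has_zero_mod_subsequence(a, m):
    # bit r of `reach` is set iff some non-empty subsequence
    # seen so far has sum % m == r
    full = (1 << m) - 1  # clearance mask: keep only bits 0..m-1
    reach = 0
    for x in a:
        r = x % m
        # adding x to every existing subsequence rotates the remainder set by r
        rotated = ((reach << r) | (reach >> (m - r))) & full if r else reach
        reach |= rotated | (1 << r)  # also: the subsequence {x} alone
        if reach & 1:  # remainder 0 achievable
            return True
    return False
```

In C++ the same idea would use std::bitset with an explicit AND against the clearance mask after each shift.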
Solutions:
E - Knapsack
Notes:
We need to know the most blocks we can take for certain remainders. The claim is that the max remainder we need to consider is 8*840 - 1. Why not larger? Any sum >= 8*840 necessarily contains a full block of 840 formable by a single number type, so we could subtract that full block, and dp[remainder - 840] + 1 already captures that case. What is the lower bound on the remainders we need to consider? Not sure; I tried 839 and got WA. Seems hard to prove a lower bound.
Solutions:
E - Colorful Subsequence
Notes:
A naive solution is dp(index, currentBallsRemoved, lastBallColor) but this is too many states. We need to drop lastBallColor. How? We store 2 values in our dp. DP(index, currentBallsRemoved) -> [(highestScore, lastBallColor), (highestScoreWithDifferentBallColor, differentBallColor)]. Now no matter what the new ball color is, we know the continued path we can take.
Solutions:
D. Yet Another Minimization Problem
Notes:
For a given pair (x+y)^2 it expands to x^2 + y^2 + 2xy. So the x^2 and y^2 both occur regardless of which array they are in. We need to minimize sum of 2xy for all pairs for both arrays. Maintain dp(index, prevASum, prevBSum) where prevBSum is a derived value. Try knapsack DP. A bitset could be used with a different DP formulation (I think regarding minimizing differences between the two arrays?) but I didn't learn it.
Solutions:
Moving Robots
Notes:
dp(startR, startC, endR, endC, stepsLeft) tells us the chance we avoid ending at (endR, endC) given the starting coordinates. Then we use linearity of expectation. I think we can use matrix exponentiation to go from k to log k in the time complexity. The only asymptotically faster idea: if k were huge, you could use matrix exponentiation. Build a 64 x 64 transition matrix M, where M[u][v] is the probability of moving from square u to square v in one step. After k steps, the probabilities are given by M raised to the k-th power: the probability that a robot starting at s ends at t is M^k[s][t]. Compute M^k using fast exponentiation in O(64^3 * log k) time.
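A generic fast matrix exponentiation sketch (plain Python lists, names mine); for this problem you would build the 64 x 64 probability matrix M and call mat_pow(M, k). Tested here on the Fibonacci matrix since its powers are easy to check.

```python
def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, k):
    # binary exponentiation: O(n^3 * log k) multiplications
    n = len(M)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while k:
        if k & 1:
            result = mat_mult(result, M)
        M = mat_mult(M, M)
        k >>= 1
    return result
```

With probabilities the entries are floats instead of ints; the code is unchanged.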
Solutions:
Counting Towers
Notes:
We can use matrix exponentiation to get log(N) per query.
Solutions:
Apple Division
Notes:
Can generate all 2^n subset sums in 2^n time and just check those. Or split into halved groups and generate 2 * 2^(n/2) sum groups. We analyze subsets from the left half and combine them with subsets from the right half. We cannot do a nested loop (for leftSum ... for rightSum ...) since that is 2^(n/2) * 2^(n/2), which is just 2^n. Instead, for a given sum from the left, we want the best sum from the right to put into the same group A. We can use binary search or two pointers on each half: sort both halves, set i = 0 on leftHalfSums and j = n-1 on rightHalfSums, and run a sorted two-pointer sweep.
Solutions:
Counting Tilings
Problem Summary:
Your task is to count the number of ways you can fill an n * m grid with 1x2 and 2x1 tiles.
Notes:
For a given mask, we need to see the next transitions we can arrive at. Each future mask's # of ways is the sum over the previous masks it can come from. I used a DFS to generate the next transitions. I used ndp because we are updating masks in a seemingly random order, so this felt safer; ndp needs to reset to [0, 0, ...], not just duplicate the dp.
Solutions:
Book Shop II
Notes:
Decompose all 100 book types into log(copies) bundles; worst case is n * log(copies) bundles. Run 0/1 knapsack on those, enumerating dp[cost] -> max pages for each item. Final time complexity: O(n * log(copies) * maxBudget). We can drop the log(copies) factor by using a monotonic queue. Both solutions in github.
Solutions:
Elevator Rides
Notes:
Consider a mask where 1 means we have put this person in an elevator (either already sent to the top, or in the current elevator we are loading up). Given a mask, we want the min (elevatorsSentToTop, currentElevatorWeight). In top down, we look at dp(mask) and say "this state must have come from a previous state with 1 fewer bit". So we enumerate all n such submasks, check their (elevatorsUsed, currentWeight), and see if we can fit the new bit or need a new elevator. This is pull DP; top down cannot push, unlike bottom up, since pushing would mean updating states above the current recursion and creating side effects. In bottom up, we can do push or pull. We can enumerate the masks in order from 1 to fullMask, because each mask depends only on submasks that are numerically smaller (1 fewer bit set), so that order is safe. We could also enumerate masks by # of bits set, generating all masks with a given popcount via something like Gosper's hack. See solution for both push and pull bottom up. Also note in our solution we used (0, 0) as the base case and added 1 to the answer. We can always add 1 (instead of checking whether the last un-closed weight is 0 or not) because a zero final weight can never happen: the last item we pack always leaves weight in the last un-closed elevator. We could also make the base case (1, 0) instead.
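A minimal bottom-up pull-DP sketch (names mine; tuple comparison gives the lexicographic min over (elevators, weight) described above):

```python
def min_rides(weights, capacity):
    # dp[mask] = (elevators already sent to the top, weight in the open elevator)
    n = len(weights)
    INF = float('inf')
    dp = [(INF, INF)] * (1 << n)
    dp[0] = (0, 0)
    for mask in range(1, 1 << n):
        best = (INF, INF)
        sub = mask
        while sub:  # pull from every state with one fewer bit
            bit = sub & -sub
            sub ^= bit
            w = weights[bit.bit_length() - 1]
            rides, cur = dp[mask ^ bit]
            if cur + w <= capacity:
                cand = (rides, cur + w)   # fits in the open elevator
            else:
                cand = (rides + 1, w)     # close it, start a new one
            best = min(best, cand)
        dp[mask] = best
    # the open elevator always has someone in it, so always add 1
    return dp[(1 << n) - 1][0] + 1
```

The unconditional + 1 at the end matches the note: the final open elevator is never empty.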
Solutions:
Meet in the Middle
Notes:
Split the array in halves and generate subset sums for each half. If we are smart we can generate a half's subset sums in 2^(n/2) instead of n * 2^(n/2) by using sum doubling (see solution). We can use dictionaries to coalesce the sums from each half, or sort and use two pointers. Bitmask DP can also generate a half's subset sums in 2^(n/2):

    def subset_sums_lowbit(a):
        m = len(a)
        sums = [0] * (1 << m)
        for mask in range(1, 1 << m):
            lsb = mask & -mask            # value of lowest set bit, e.g. 8
            b = lsb.bit_length() - 1      # index of that bit
            sums[mask] = sums[mask ^ lsb] + a[b]
        return sums
Solutions:
School Excursion
Notes:
Supposedly NTT can be used but I don't understand it. Bundle splitting could be used but I don't think it improves complexity since we have at most root N unique sizes, also wasn't needed to AC.
Solutions:
3780. Maximum Sum of Three Numbers Divisible by Three
Problem Summary:
Given an array of numbers, pick 3 numbers with a sum divisible by 3. Find the max sum. N <= 1e5.
Notes:
We can do basic combinatorics like pick three numbers all with remainder 2. I think this complexity scales nicely with the remainder and pick count, it's like n + f(rem, pick) instead of n * rem * pick. Can use bucket sort for each remainder bucket. Knapsack dp is the multiplied time complexity. We can do it with ndp or without in a few different ways.
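A sketch of the combinatorics/bucket approach (names are mine, and returning -1 when no valid triple exists is my assumption): keep only the top three values per remainder bucket, then try the remainder combos (0,0,0), (1,1,1), (2,2,2), (0,1,2).

```python
from collections import Counter

def max_sum_div3(nums):
    # keep only the top three values per remainder bucket
    buckets = {0: [], 1: [], 2: []}
    for x in nums:
        b = buckets[x % 3]
        b.append(x)
        b.sort(reverse=True)
        del b[3:]
    best = -1  # assumption: -1 signals no valid triple
    for combo in ((0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2)):
        need = Counter(combo)  # how many numbers we need from each bucket
        if all(len(buckets[r]) >= c for r, c in need.items()):
            best = max(best, sum(sum(buckets[r][:c]) for r, c in need.items()))
    return best
```

This is O(n) plus a constant amount of combo work, matching the n + f(rem, pick) scaling mentioned above.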
Solutions:
Maximum Subarray Sum II
Problem Summary:
Given an array of n integers, your task is to find the maximum sum of values in a contiguous subarray with length between a and b. a <= b <= n <= 1e5.
Solutions:
Y - Grid 2
Notes:
# of ways without obstacles is combinatorics. Sort stones by (r, c), then enumerate over stones: for each stone, dp[stone] tells us the # of paths that first get blocked by this stone. For each stone, iterate over all previous stones and subtract the # of paths that would've first gotten blocked there (multiplied by the # of paths between the two stones). Return dp[-1] (we append a fake ending stone at the end). n^2 + combinatorics.
Solutions:
F - #(subset sum = K) with Add and Erase
Notes:
dp[x] is the number of ways to make x as we add elements. We scan right to left to avoid a new_dp array, but as we remove, we actually scan left to right. For instance, if we have two 5-balls, we have 2 ways to make 5 and one way to make 10. If we remove a 5, we don't subtract 2 ways from 10, since there aren't actually 2 ways to make 5 anymore; so we need to go left to right.
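The add/remove pair sketched in Python (names mine; the modulus is a placeholder, substitute the problem's actual one):

```python
MOD = 998244353  # assumption: use the problem's actual modulus

def add_ball(dp, x):
    # right to left so the new ball is counted at most once per sum
    for s in range(len(dp) - 1, x - 1, -1):
        dp[s] = (dp[s] + dp[s - x]) % MOD

def remove_ball(dp, x):
    # exact inverse of add_ball: must scan left to right,
    # so dp[s - x] is already the count WITHOUT the removed ball
    for s in range(x, len(dp)):
        dp[s] = (dp[s] - dp[s - x]) % MOD
```

The left-to-right removal is what makes the two-5-balls example above come out right.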
Solutions:
F - LIS on Tree
Notes:
Using the n log n LIS with binary search, we dfs on nodes: add the node, recurse on children, then roll back. I used my rollbackLis template.
Solutions:
F - |LIS| = 3
Notes:
Store the minimum number ending a sequence of length 1, 2, and 3 and do digit DP. The top down (i, min1, min2, min3) would just be a bottom up over min1*min2*min3 states as we iterate on i...
Solutions:
L - Deque
Notes:
dp[l][r] is the maximum difference a player can get (from their perspective). We don't need a state tracking whose turn it is (we could also derive whose turn it is and use extra logic, but we don't need that either): we either take the left or the right and subtract the max difference the opponent can get. We could also make the dp just maximize the score and be turn-independent, but I think we need prefix sum dp for that. The outer loop iterates on length, the inner loop on the left boundary. We can space optimize to 1d but it is more complicated.
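The turn-free difference DP sketched in Python (function name mine): outer loop on length, inner on the left boundary, with the sign flip standing in for the turn variable.

```python
def best_difference(a):
    n = len(a)
    # dp[l][r]: best (current player's score - opponent's score) on a[l..r]
    dp = [[0] * n for _ in range(n)]
    for l in range(n):
        dp[l][l] = a[l]  # one element left: take it
    for length in range(2, n + 1):
        for l in range(n - length + 1):
            r = l + length - 1
            # take an end, then the opponent plays optimally on the rest
            dp[l][r] = max(a[l] - dp[l + 1][r], a[r] - dp[l][r - 1])
    return dp[0][n - 1]
```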
Solutions:
D - Knapsack 1
Notes:
I did bottom up where rows are the (up to 100) item indices and columns are bag weights. dp[i][w] represents the best value we can get using the first i items with exactly a bag weight of w; then we return max(dp[-1]).
Solutions:
K - Stones
Notes:
Top down in C++ I did dp(stonesLeft, playerTurn), and inside we loop over all removal options. I don't think we need to track the player turn though, just whether the current player can win. Bottom up, without that 2x parameter, I did dp[x] = whether the player to move with x stones left can force a win; loop on each x, then on each move, and if the resulting dp is a loss for the opponent, we can win.
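The bottom-up version sketched in Python (names mine):

```python
def first_player_wins(k, moves):
    # win[x]: the player to move with x stones left can force a win
    win = [False] * (k + 1)
    for x in range(1, k + 1):
        # winning iff some move leaves the opponent in a losing state
        win[x] = any(m <= x and not win[x - m] for m in moves)
    return win[k]
```

With moves {1, 2} the losing positions are exactly the multiples of 3, a quick sanity check.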
Solutions:
E - Knapsack 2
Notes:
dp[madeValue] tells us the minimum weight needed to make that value. The outer loop iterates on elements; the inner loop I did backwards with push dp so I can re-use the same dp array.
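The value-indexed knapsack sketched in Python (names mine; items are (weight, value) pairs):

```python
def knapsack_by_value(items, capacity):
    # dp[v] = minimum weight achieving total value exactly v
    INF = float('inf')
    max_v = sum(v for _, v in items)
    dp = [INF] * (max_v + 1)
    dp[0] = 0
    for w, v in items:
        for made in range(max_v, v - 1, -1):  # backwards: 0/1, same array
            if dp[made - v] + w < dp[made]:
                dp[made] = dp[made - v] + w
    # best value whose minimum weight fits in the bag
    return max(v for v in range(max_v + 1) if dp[v] <= capacity)
```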
Solutions:
I - Coins
Notes:
I did dp[i][heads] = the probability of getting exactly that many heads using coins 0...i. Loop on i, then on heads. Can optimize to 1d space by preserving the previous layer.
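The 1d-space version sketched in Python (name mine): scanning heads downward is what preserves the previous layer.

```python
def prob_more_heads(p):
    # p[i]: probability coin i lands heads; returns P(heads > tails)
    n = len(p)
    dp = [0.0] * (n + 1)  # dp[h]: probability of exactly h heads so far
    dp[0] = 1.0
    for pi in p:
        for h in range(n, 0, -1):  # downward: dp[h-1] is still the old layer
            dp[h] = dp[h] * (1 - pi) + dp[h - 1] * pi
        dp[0] *= 1 - pi
    return sum(dp[n // 2 + 1:])
```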
Solutions:
Q - Flowers
Notes:
dp[x] tells us the max beauty for an increasing subsequence ending in x initialize to only 0s then iterate over each flower, use a segment tree range max query point update to apply the updates as we go
Solutions:
C - Vacation
Notes:
standard dp state machine, space optimized
Solutions:
B - Frog 2
Notes:
Same as Frog 1 but N*K this time: a 1d dp going left to right, where dp[i] tells us the answer to reach stone i.
Solutions:
A - Frog 1
Notes:
standard left to right 1d dp where dp[i] tells us the answer to reach stone i
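A sketch of that 1d dp (function name mine), where h is the list of stone heights:

```python
def frog1(h):
    n = len(h)
    dp = [0] * n  # dp[i]: min total cost to reach stone i
    for i in range(1, n):
        dp[i] = dp[i - 1] + abs(h[i] - h[i - 1])        # hop from i-1
        if i >= 2:
            dp[i] = min(dp[i], dp[i - 2] + abs(h[i] - h[i - 2]))  # or from i-2
    return dp[-1]
```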
Solutions:
F - Construct a Palindrome
Notes:
I did a BFS on node pairs (node1, node2) with a triple loop on the edges and letters. I think we have n^2 nodes, and each node visits its edges at most n times (once for every adjacent node pair), so I think we get something like n^2 + m*n. I'm not sure, but this makes sense.
Solutions:
O - Matching
Notes:
dp(manIndex, womanMask) looks like n * 2^n states, but manIndex is derived (the popcount of the mask), so it is 2^n states. Inside, we iterate over all women to pair with that man, so n * 2^n total time and 2^n space.
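A sketch of that DP (names mine), deriving manIndex from the popcount of the mask:

```python
def count_matchings(compat):
    # compat[i][j] = 1 if man i is compatible with woman j
    n = len(compat)
    MOD = 10**9 + 7
    dp = [0] * (1 << n)  # dp[mask]: ways to match the first popcount(mask) men
    dp[0] = 1
    for mask in range(1, 1 << n):
        man = bin(mask).count("1") - 1  # manIndex derived from the popcount
        dp[mask] = sum(dp[mask ^ (1 << w)]
                       for w in range(n)
                       if mask >> w & 1 and compat[man][w]) % MOD
    return dp[(1 << n) - 1]
```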
Solutions:
E - Stronger Takahashi
Problem Summary:
We need to move from the top left to the bottom right; there are walls, and we can destroy a 2x2 region. Minimum # of destroys needed?
Notes:
Clean impl: loop on all adjacent cells directly (those have cost 0 if not a wall), then loop on all cells in the destroyable region, like offset (2,1), and push a cost+1 node for each.