I’d be curious if someone here could explain the initial nonsensical answer for the coin change problem.
I understand that not giving the lower bounds effectively lets it find an arbitrarily low (generally very negative) number of coins that satisfy the problem (so the minimization would basically go to negative infinity).
But why would it respond with “to get 37 in the lowest number of coins I’ll give you 37 ones”?
Or is this kind of like “if the answer to your minimization problem is negative infinity, the answer is undefined behavior”?
sirwhinesalot 5 hours ago [-]
I think it has to do with the algorithm Z3 uses for optimization (there are several).
It works in a "bottom-up" manner, first giving the minimization goal its lowest possible value and then it tries to "work its way up" to something satisfiable.
So in this case there's something like:
goal = c1 + c2 + c3
minimize goal
(even if you write minimize (c1 + c2 + c3), it's still creating that goal variable internally, so to speak)
To start the process, it now wants to assign goal the lowest possible value, even if the result is not satisfiable. It looks at the bounds of c1, c2, and c3 to determine this, and they're all -infinity.
The lowest possible value of goal is -infinity, which the optimizer has no idea what to do with, and just basically gives up. When used as an application, Z3 will tell you something's wrong by spitting this out in the terminal:
(+ c1 c2 c3) |-> (* (- 1) oo)
A different optimization algorithm, like one that works "top-down" (find the first satisfiable assignment, then make it better) would actually be able to converge. I think OptiMathSAT would be able to handle it because it also does top-down reasoning for this.
CP solvers and MIP solvers all enforce variable bounds so they don't have this issue.
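To make the failure mode concrete, here is a plain-Python sketch (not Z3's actual algorithm; the usual 1/5/10/25 denominations are assumed, since the thread doesn't state them): without lower bounds, "negative coins" are feasible and the objective keeps dropping as the search window widens, which is exactly the -infinity the optimizer chokes on.

```python
from itertools import product

TARGET = 37  # cents to pay

def best_total(lo, hi):
    """Minimum total coin count over all (nickel, dime, quarter) counts in
    [lo, hi] whose value, plus the implied penny count, pays exactly TARGET."""
    best = None
    for n, d, q in product(range(lo, hi + 1), repeat=3):
        p = TARGET - 5 * n - 10 * d - 25 * q  # pennies make up the rest
        if lo <= p <= hi:
            total = p + n + d + q
            if best is None or total < best:
                best = total
    return best

# Counts allowed to go negative: the "minimum" is already negative in this
# small window, and widening the window keeps finding lower values. The true
# infimum is -infinity, which Z3 reports as (* (- 1) oo).
print(best_total(-10, 10))

# With the lower bound 0 restored, the answer is the sensible 4 coins
# (25 + 10 + 1 + 1).
print(best_total(0, 37))
```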
gopalv 14 hours ago [-]
> is this kind of like “if the answer to your minimization problem is negative infinity, the answer is undefined behavior”?
Yes, it picked a valid result and gave up.
I got a similar nonsensical result in my run-ins[1] with SAT solvers too, until I added a lowerbound=0.
The "real life" follow-up question in interviews was how to minimize the total number of intermediate rows for a series of joins within a cost-based optimizer.
37 is irreducible in the problem statement, so the answer is 37.
Think about it purely in the set-theoretic sense: "what is the minimal set containing 37 elements?" The answer is "the set containing 37 elements."
Dylan16807 12 hours ago [-]
The fixed 37 is the value of the coins. It's very easy to reduce the number of coins.
Either you misunderstood something or please explain more. Note that both the working and broken versions have the same 37 in them.
And the problem statement starts with no coins chosen. It had to actively choose pennies to get that broken result. If you told it about the coins in a different order, it probably would have given a different answer.
I guess z3 is fine with it, but it confuses me that they decided pips wouldn't have unique solutions
adsharma 8 hours ago [-]
For those of you wondering how to use z3, please consider coding in static python (not z3py) and then transpiling to smt2.
You'll be able to solve bigger problems with familiar/legible syntax and even use the code in production.
ioma8 3 hours ago [-]
Can you expand on that? What do you mean by static python?
G0lg0thvn 13 minutes ago [-]
z3py goes through python’s interpreter to talk to the c++ core of z3; this creates a performance bottleneck as the system size grows. The native input language of z3 is smt2, so coding in plain old python with type hints and then transpiling to smt2 would, ideally, help with the bottleneck, enabling the solving of increasingly complex systems.
That’s how I understand it at least! Someone please correct me if I’m off base.
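A minimal sketch of that idea, using the coin-change problem from the article (the function and its shape are my own illustration, not any particular transpiler): instead of building terms through z3py calls, generate the SMT2 text directly in plain Python and feed the result to the z3 binary.

```python
def coin_change_smt2(target, coins):
    """Emit an SMT2 optimization problem as plain text -- no z3py involved."""
    names = [f"c{i}" for i in range(len(coins))]
    lines = [f"(declare-const {n} Int)" for n in names]
    lines += [f"(assert (>= {n} 0))" for n in names]  # the crucial lower bounds
    value = " ".join(f"(* {v} {n})" for v, n in zip(coins, names))
    lines.append(f"(assert (= (+ {value}) {target}))")
    lines.append(f"(minimize (+ {' '.join(names)}))")
    lines += ["(check-sat)", "(get-model)"]
    return "\n".join(lines)

smt2 = coin_change_smt2(37, [1, 5, 10, 25])
print(smt2)  # save to a file and run `z3 file.smt2`
```

The Python layer here is just string assembly, so the interpreter overhead stays constant no matter how large the generated problem gets.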
weinzierl 6 hours ago [-]
The way you can write code of the target DSL without wrapping it in a string both gives me fears and excitement:
solver.assert((&x + 4).eq(7));
My subconscious shouts “MISSING QUOTES” while my rational side says “Calm down, that's nice and clean and safe and how it's supposed to be - ever has been”.
mzl 5 hours ago [-]
One missing piece in Rust is that the comparison operators are hard-coded to return bool values, which means it is not possible to build up full expression trees from normal code using operator overloading. Thus the need for functions like `eq`.
In general, for modeling layers embedded in programming languages, having operator overloading makes the code so much better to work with. Modeling layers where one has to use functions or strings that are evaluated are much harder to work with.
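The contrast with Python (which z3py exploits) can be shown with a toy expression-tree builder of my own — Python lets `==` return an arbitrary object, which is exactly what Rust's bool-returning comparison operators forbid:

```python
# Toy AST builder: __eq__ returns an expression node instead of a bool.
class Expr:
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Expr("+", self, wrap(other))

    def __eq__(self, other):  # builds a constraint rather than comparing
        return Expr("=", self, wrap(other))

    def __repr__(self):
        if self.op == "var":
            return self.args[0]
        if self.op == "const":
            return str(self.args[0])
        return f"({self.op} {' '.join(map(repr, self.args))})"

def wrap(v):
    return v if isinstance(v, Expr) else Expr("const", v)

x = Expr("var", "x")
constraint = (x + 4) == 7  # the tree (= (+ x 4) 7), not True/False
print(repr(constraint))    # prints "(= (+ x 4) 7)"
```

In Rust the same `(x + 4) == 7` would be forced to evaluate to a plain `bool`, losing the tree, hence APIs like `(&x + 4).eq(7)` instead.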
aDyslecticCrow 5 hours ago [-]
I'm a bit skeptical. When used right, operator overloading is an elegant extension of a language. But all too often I feel it is too powerful an abstraction tool, one that can be abused to make a library utterly incomprehensible and make intuition about language syntax unreliable. I see this in C++ at times.
In particular, if you've looked at linear algebra libraries that have two or three different multiplication operators (cross, dot, element-wise), the ones that commit to using functions from the start end up far more readable than any attempt to abuse operator overloading.
I prefer:
- DSL (domain specific language, especially for symbolic solving such as mathematica)
- Strings (That's just a DSL hiding in a parse() function, which can be called from a different language. Write the code as if it's a DSL, but without pulling in a new tech stack)
- Functions, but preferably in a chainable style. sym("a").add(1).eq(sym("b")).solve() is better than solve(eq(add(1,a),b))
I have seen operator overloading done right... but I've also cursed loudly at it done badly.
Rust in particular uses up most symbols for other things (the borrow checker takes a lot of them, and "==" is forced to be consistent), so you end up with only "+ - * /" as overloadable. "+" and "-" being the least abused, in my experience, perhaps makes that a good compromise.
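The chainable style from the list above can be sketched in a few lines (a toy of mine that only renders expressions as text, with no actual solving):

```python
# Chainable builder: sym("a").add(1).eq(sym("b")) reads left to right
# like the formula itself, with no operator overloading involved.
class Sym:
    def __init__(self, text):
        self.text = text

    def add(self, other):
        return Sym(f"(+ {self.text} {fmt(other)})")

    def eq(self, other):
        return Sym(f"(= {self.text} {fmt(other)})")

def fmt(v):
    return v.text if isinstance(v, Sym) else str(v)

def sym(name):
    return Sym(name)

expr = sym("a").add(1).eq(sym("b"))
print(expr.text)  # prints "(= (+ a 1) b)"
```

Because each method returns a fresh node, the chain builds the same tree a nested solve(eq(add(...))) call would, just in reading order.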
mzl 4 hours ago [-]
I would say that for the specific use-case of a modeling layer for something like constraint programming, SMT, or MIP, full operator overloading is superior. I've written such layers in many languages and used them in even more languages, and it really does help a _lot_ to have the standard mathematical expressions. All the other alternatives become way too cumbersome to write effective models with, in my experience.
Is it worth having operator overloading in a language or not? That is a different question. People do bad things with every feature, including operator overloading. I'm slightly in favor of it, but I'm not a fan of using it for anything other than either immediate arithmetic or creating expression trees representing arithmetic.
mohsen1 4 hours ago [-]
This was such a pleasure to read! Thank you for sharing!
My understanding is that solvers are like regexes: they can easily get out of hand in runtime complexity. At least this is what I have experienced with iOS's AutoLayout solver.
mzl 4 hours ago [-]
Any tool that can solve hard problems will also have non-trivial runtime behavior. That is an unfortunate fact. But you are also correct in that combinatorial optimization solvers (CP, SAT, SMT, MIP, ...) often have quite sharp edges that are non-intuitive.
For the iOS AutoLayout, what kind of issues have you seen, and how complex were the problems?
mohsen1 3 hours ago [-]
It’s a familiar error for iOS developers: “constraints are too complex to solve”
klft 3 hours ago [-]
Would a SQL optimizer use a generic solver as described here or are there special algorithms for such problems?
ebonnafoux 5 hours ago [-]
Does someone with some experience on this subject have an opinion on the best solver with Python bindings for a beginner? The OP uses Z3 but also mentioned MiniZinc, and I have heard about Google OR-Tools.
mzl 5 hours ago [-]
If you want to work in Python, I would either use OR-Tools, which has a Python library, or CPMpy, a solver-agnostic Python library that uses numpy style for modeling combinatorial optimization problems.
There is a Python package for MiniZinc, but it is aimed at structuring calls to MiniZinc, not as a modeling layer for MiniZinc.
Personally, I think it is better to start with constraint programming style modeling as it is more flexible, and then move to SMT style models like Z3 if or when it is needed. One of the reasons I think CP is a better initial choice is the focus on higher-level concepts and global constraints.
sevensor 22 minutes ago [-]
If you do want to start with SMT though, z3 has quite good bindings. I found it intuitive to use, and its ability to give you answers to very hard problems is like owning a magic wand.
sophacles 15 hours ago [-]
Constraint solvers are a cool bit of magic. The article underplays how hard it is to model problems for them, but when you do have a problem that you can shape into a SAT problem... it feels like cheating.
ngruhn 14 hours ago [-]
I took a course on SMT solvers in uni. It's so cool! They're densely packed with all these diverse and clever algorithms. And there is still this classic engineering aspect: how to wire everything up, make it modular...
mikestorrent 14 hours ago [-]
If you're good at doing this, you should check out the D-Wave constrained quadratic model solvers - very exciting stuff in terms of the quality of solution it can get in a very short runtime on big problems.
mzl 5 hours ago [-]
Has there been any published example of where this solver outperforms a classical solver?
pkoird 15 hours ago [-]
Going one abstraction deeper, SAT solvers are black magic.
metadat 12 hours ago [-]
Yes, explaining the "why / how did the SAT solver produce this answer?" can be more challenging than explaining some machine learning model outputs.
You can literally watch the execs' excitement and faith evaporate when the issue of explainability arises, as blaming the solver is not sufficient to save their own hides. I've seen it hit a dead end at multiple $bigcos this way.
[1] - https://gist.github.com/t3rmin4t0r/44d8e09e17495d1c24908fc0f...
I used it to solve the new NYT game, Pips: https://kerrigan.dev/blog/nyt-pips