I did this month's Jane Street Puzzle with LLM assistance (my name is on the list). It took just a smidgen more than 2 hours.
About 25% of my time was spent just getting Claude to actually understand how to view the table. Vision models still don't seem good enough to make sense of tabular data (especially as an image). I had to hint to it that the puzzle was 13x13, and that the example puzzle at the bottom was 5x5. Even once it understood that, I had to explicitly tell it to use ImageMagick to cut the puzzle into cell-sized images so that it could more accurately do OCR on the individual cells.
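To give a flavor of that cropping step: this is a hedged sketch, not the exact commands Claude ran. The helper below computes Pillow-style crop boxes for an n x n grid (the filename `puzzle.png` and the 650px dimensions are illustrative assumptions).

```python
def grid_boxes(width, height, n):
    """Compute (left, upper, right, lower) crop boxes for an n x n grid.

    Boxes use the Pillow convention. Integer division means edge cells
    absorb any remainder, so the whole image is covered even when the
    dimensions are not evenly divisible by n.
    """
    boxes = []
    for row in range(n):
        for col in range(n):
            left = col * width // n
            upper = row * height // n
            right = (col + 1) * width // n
            lower = (row + 1) * height // n
            boxes.append((left, upper, right, lower))
    return boxes

# For a hypothetical 650x650 scan of the 13x13 puzzle, each box could be
# fed to Pillow's Image.crop() and then to an OCR pass per cell:
#   cell = Image.open("puzzle.png").crop(box)
boxes = grid_boxes(650, 650, 13)
```

The same tiling can be done in one ImageMagick invocation with the `-crop 13x13@` tile geometry (e.g. `convert puzzle.png -crop 13x13@ +repage cell_%d.png`), which I believe is closer to what the model actually did.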
I won't mention how to solve it, as I am not sure about the etiquette of when it is okay to talk about these puzzles; maybe I will update this section at a later time.
Anyway, I am just floored that you can solve something like this in 2 hours even with the parsing difficulties. I am certain that this is a great deal faster than I would have been able to solve it if I had to write the code myself. It is really interesting to instrument a solver like this so that you have a sense of how it is doing.
In my particular case, it did not seem like Claude would have necessarily solved the puzzle on its own. With that said, although I could have written the algorithms it used to solve this, there is simply no way I could have implemented a solution in 2 hours. Frankly, it writes code much more quickly and often more accurately than I do.
The only real advantage I have over the LLM is the wisdom of what we should do or how we should go about it; it is, however, better than me at the actual application of my own knowledge. It is interesting that it is an intelligence that knows how to produce the output of a specified task, but lacks the judgement to specify the task itself. I find that seeming contradiction remarkable; I think it may point to just how alien this intelligence is compared to human intellect.