• 1 Post
  • 846 Comments
Joined 2 years ago
Cake day: August 15th, 2023


  • In my experience, when using reasoning models, it can count, but not very consistently. I’ve tried random assortments of letters and it counts them correctly sometimes. It seems to have a much harder time when the same letter repeats many times, perhaps because those runs are tokenized irregularly.



  • I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs specifically on this. The LLM doesn’t even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which letters a word contains from texts describing those words, but normally that shouldn’t be expected.
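
A minimal sketch of what that comment is describing. This is a toy greedy longest-match tokenizer with a made-up vocabulary (real LLM tokenizers like BPE are learned and far larger), just to show that once a word is split into multi-character tokens, the individual letters are no longer directly visible to the model:

```python
# Toy illustration, NOT a real LLM tokenizer: a greedy longest-match
# tokenizer over a made-up vocabulary. The point is that the model
# receives token IDs, not characters.
VOCAB = {"straw": 101, "berry": 102, "st": 103, "raw": 104,
         "b": 105, "e": 106, "r": 107, "y": 108}

def tokenize(word):
    """Split a word into the longest vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i]!r}")
    return tokens

word = "strawberry"
pieces = tokenize(word)
print(pieces)                      # ['straw', 'berry']
print([VOCAB[p] for p in pieces])  # [101, 102] -- what the model "sees"
print(word.count("r"))             # 3 -- trivial on raw characters
```

Counting the r's is trivial on the character string, but the model only ever gets `[101, 102]`, so it has to have learned from training text which letters hide inside each token.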