
How Shopify Improved Its LLM Chatbot Sidekick

August 31, 2023

Via: InfoQ

While a Large Language Model chatbot opens the door to innovative solutions, crafting a user experience that feels as natural as possible requires specific effort, argues Shopify engineer Ates Goral, in particular to prevent rendering jank and to reduce latency.

Streaming a Markdown response returned by the LLM leads to rendering jank because special Markdown characters, like *, remain ambiguous until the full expression is received, e.g., until the closing * arrives. The same problem applies to links and all other Markdown operators: an expression cannot be rendered correctly until it is complete, so for a short period the rendered output is visibly wrong and then reflows.
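One way to picture the issue is a renderer that holds back the trailing fragment of the stream whenever it might still be completed by future tokens, and only renders the prefix that is already unambiguous. The TypeScript sketch below illustrates that buffering idea; it is not Shopify's actual implementation, and the function name splitStableMarkdown and the heuristics inside it are assumptions for the example.

```typescript
// Minimal sketch (assumed, not Shopify's code): split the accumulated stream
// into a "stable" prefix that is safe to render as Markdown and a "pending"
// suffix that may still change meaning (an unclosed *emphasis*, an
// unfinished [link](...), etc.).
function splitStableMarkdown(buffer: string): { stable: string; pending: string } {
  let holdFrom = buffer.length;

  // An odd number of '*' means the last one has no closing match yet.
  const emphasisCount = (buffer.match(/\*/g) ?? []).length;
  if (emphasisCount % 2 === 1) {
    holdFrom = Math.min(holdFrom, buffer.lastIndexOf("*"));
  }

  // A '[' whose '](...)' has not arrived yet could still become a link.
  const lastOpenBracket = buffer.lastIndexOf("[");
  if (lastOpenBracket !== -1 && !/\[[^\]]*\]\([^)]*\)/.test(buffer.slice(lastOpenBracket))) {
    holdFrom = Math.min(holdFrom, lastOpenBracket);
  }

  return { stable: buffer.slice(0, holdFrom), pending: buffer.slice(holdFrom) };
}

// Usage: accumulate streamed chunks and re-render only the stable part,
// so partially received Markdown never flashes as raw characters.
let received = "";
for (const chunk of ["Here is *bo", "ld* text and [a li", "nk](https://example.com)."]) {
  received += chunk;
  const { stable } = splitStableMarkdown(received);
  console.log(stable); // feed `stable` to the Markdown renderer
}
```

The trade-off of such buffering is a small amount of added latency on the held-back fragment in exchange for output that never has to be re-rendered once it appears.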

Read More on InfoQ