Show HN: Keep Your Next Viral AI App Free for Longer with Local Embeddings

blog.fxn.ai

5 points by olokobayusuf 5 hours ago

Hey HN!

I'm the founder of Function, a platform that enables developers to run Python functions on-device. We've been quietly building for the past several months, but we figured we'd start by shipping some low-hanging fruit:

Function LLM patches your OpenAI client to generate embeddings on-device, both in the browser and in Node.js. The library itself is tiny; there's barely enough code to justify a standalone package. The more interesting piece is how it uses Function to generate embeddings on-device, fully cross-platform.
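To make "patches your OpenAI client" concrete, here's a rough TypeScript sketch of the idea (not the library's actual code): we swap out `embeddings.create` in place, so callers keep the familiar OpenAI request/response shape while the vectors are computed locally. The `embedLocally` helper below is a hypothetical stand-in for whatever on-device embedder backs it.

    import OpenAI from "openai";

    // Stand-in for an on-device embedder (e.g. Function running an
    // embedding model). Hypothetical; not part of the real library.
    declare function embedLocally(texts: string[]): Promise<number[][]>;

    // Patch the client in place: embeddings.create keeps OpenAI's
    // request/response shape, but the vectors are computed locally.
    export function patchEmbeddings(client: OpenAI): OpenAI {
      client.embeddings.create = (async (params: OpenAI.EmbeddingCreateParams) => {
        const inputs = Array.isArray(params.input) ? params.input : [params.input];
        const vectors = await embedLocally(inputs as string[]); // sketch: assumes string inputs
        return {
          object: "list" as const,
          model: params.model,
          data: vectors.map((embedding, index) => ({
            object: "embedding" as const,
            embedding,
            index,
          })),
          usage: { prompt_tokens: 0, total_tokens: 0 },
        };
      }) as typeof client.embeddings.create;
      return client;
    }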

For the embedding model, we've partnered with Nomic AI to use their `nomic-embed-text-v1.5` model. We plan to add more embedding models before adding support for text generation with small LLMs.
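Continuing the sketch above, usage looks the same as the hosted API; only the model name points at the local model (the tag below is illustrative, check the docs for the real identifier):

    // Continuing the hypothetical patchEmbeddings sketch above (ES module, top-level await).
    const openai = patchEmbeddings(new OpenAI({ apiKey: "unused-for-local-embeddings" }));

    const response = await openai.embeddings.create({
      model: "nomic-embed-text-v1.5", // illustrative tag; see the docs for the real identifier
      input: ["How do I keep my viral AI app free?"],
    });

    console.log(response.data[0].embedding.length); // dimensionality of the local embedding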

At this point, you're probably wondering how we differ from the likes of Ollama. Unlike the other players in the space, we believe the whole point of on-device AI is to push as much of your app's heavy computation as possible onto your users' devices, not to spin up a 'local' AI service that still runs on AWS or some other cloud. This way, you save a ton of cash by simply not operating a hosted inference service, you build user trust with privacy by default, and your serving infrastructure is far simpler at scale.

Function handles much of the heavy lifting for you. We compile Python functions to run on Android, in the browser, and on iOS, macOS, Linux, and Windows. And we're exhaustive about using whatever hardware (GPU, NPU, etc.) and acceleration API (CoreML, CUDA, etc.) a particular device offers.

---

Relevant Links:
- Function LLM on GitHub (please star!): https://github.com/fxnai/fxn-llm-js
- Document Retrieval Demo (fully on-device): https://fxn-llm-js.vercel.app/
- Function Docs: https://docs.fxn.ai/introduction
- Function Waitlist (to write your own fxns): https://fxn.ai/waitlist