Aqueduct's LLM support makes it easy for you to run open-source LLMs on any infrastructure that you use. With a single API call, you can run an LLM on a single prompt or even on a whole dataset!
Hi everyone! LLMs have taken the world by storm, but using them is a pain (or a non-starter) for most people, due to concerns around data privacy, IP ownership, and cost. Open-source LLMs like LLaMA, Dolly, and Vicuna have enabled enterprises to think about using LLMs, but they're a pain to operate.
At Aqueduct, our goal has been to enable ML teams to use the best technology without the operational nightmare of running ML in the cloud, and we're super excited to share that Aqueduct now allows you to run open-source LLMs with a single API call.
➡️ Aqueduct's Python API allows you to call an LLM with a single line of code. No need to worry about installing drivers, managing library dependencies, or debugging configuration parameters.
☁️ Aqueduct is designed to work with any infrastructure you use; you can run your LLMs on a large server or on a Kubernetes cluster. You can even have Aqueduct spin up a cluster for you.
🔁 You can publish your LLM-based workflows to run ad hoc or on a fixed schedule using Aqueduct.
💡 Aqueduct's visibility features extend naturally to LLMs, so you can see what parameters or prompts you used and how performance evolves over time.
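The single-call pattern described above can be sketched in plain Python. The Aqueduct-specific names in the comments below (`llm_op`, the `name` and `engine` parameters) are assumptions inferred from the launch description, not a verified API reference; the runnable stand-in only illustrates the "one call, single prompt or whole dataset" shape:

```python
# Hypothetical Aqueduct usage, inferred from the announcement (unverified):
#
#   from aqueduct import llm_op                       # assumed import
#   vicuna = llm_op(name="vicuna_7b", engine="eks")   # assumed signature
#   answers = vicuna(reviews_df["text"])              # one call, whole column
#
# Self-contained stand-in so the pattern runs without Aqueduct installed:
def stub_llm(prompt: str) -> str:
    """Pretend LLM: returns a canned 'completion' for each prompt."""
    return f"completion for: {prompt}"

def run_llm(prompts):
    """Apply the model uniformly to one prompt or a whole dataset."""
    if isinstance(prompts, str):
        return stub_llm(prompts)
    return [stub_llm(p) for p in prompts]

single = run_llm("What is an aqueduct?")
batch = run_llm(["Summarize review 1.", "Summarize review 2."])
```

The point of the pattern is that callers never branch on input shape themselves; the same entry point handles a prompt or a batch, which is what makes "run an LLM on a whole dataset" a one-liner.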
We'd love to hear what you think! Check out our open-source project or join our Slack community.
GitHub: https://github.com/aqueducthq/aq...
Slack: https://slack.aqueducthq.com
It's amazing to see part of the team that worked on Vicuna and LMSYS also publish a simple API to deploy and run these OSS LLMs in production. Excited to see everyone leverage this!
While most solutions prescribe a "rip & replace fork-lift" strategy, Aqueduct is refreshing in its philosophy of empowering and working with your existing best-in-class ML / LLM technology choices.
The next generation of AI is in all our hands, not behind hyperscaler moats. This launch lets us run our own LLMs, on-prem or in a secure cloud. Aqueduct makes it easy, using infrastructure you already understand.
I am really excited to see how people actually use LLMs to solve real problems. How will you use LLMs?
"Aqueduct simplifies the deployment of open-source LLMs, making it easier to leverage their power for natural language processing tasks."
Finally, ML teams can leverage the power of LLMs with a single API call, saving time on installation and configuration headaches. And with the flexibility to run LLMs on any infrastructure, Aqueduct makes ML deployment a breeze. Kudos to the Aqueduct team for simplifying the ML journey!
About Aqueduct on Product Hunt
“The easiest way to run open source LLMs”
Aqueduct launched on Product Hunt on May 10th, 2023 and earned 107 upvotes and 9 comments, placing #17 on the daily leaderboard.
Aqueduct was featured in Software Engineering (42.4k followers), Developer Tools (511.4k followers), Artificial Intelligence (466.8k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 180.7k products, making this a competitive space to launch in.
Who hunted Aqueduct?
Aqueduct was hunted by Vikram Sreekanti. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.