This is an autopost blog, friends. We try to bring you all the latest sports, news, and updates.
Thursday, April 23, 2026
Show HN: Agent Vault – An HTTP credential proxy and vault for AI agents https://ift.tt/zCrwKIH
Show HN: Agent Vault – An HTTP credential proxy and vault for AI agents https://ift.tt/mNbvEqk April 22, 2026 at 09:55PM
Show HN: AgentSearch – Self-hosted search and MCP for AI agents, no API keys https://ift.tt/PWSFz2m
Show HN: AgentSearch – Self-hosted search and MCP for AI agents, no API keys https://ift.tt/1RdrNAw April 23, 2026 at 11:55PM
Show HN: Turning a Gaussian Splat into a videogame https://ift.tt/k40HC2X
Show HN: Turning a Gaussian Splat into a videogame https://ift.tt/eNdAS4P April 23, 2026 at 07:48PM
Wednesday, April 22, 2026
Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/e2ibl0f
Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/ALx4DZX April 23, 2026 at 01:27AM
Show HN: Netlify for Agents https://ift.tt/vcZf6wF
Show HN: Netlify for Agents I launched Netlify with a Show HN more than 11 years ago today, for humans. Today we're launching our agent-first version of Netlify. It's super early days for this, but I expect it to become as important as our original launch over time. It's as hard to perfect these flows as it was to perfect some of the initial human DX flows, since the agents are non-deterministic and keep changing and evolving, and we'll have more to show soon on our eval tooling for this. Try it out with an agent; we would love feedback on what works and what doesn't as we keep iterating on making Netlify better for our new agent friends. https://netlify.ai April 22, 2026 at 10:27PM
Tuesday, April 21, 2026
Show HN: Agent Brain Trust, customisable expert panels for AI agents https://ift.tt/UuIovi4
Show HN: Agent Brain Trust, customisable expert panels for AI agents Agent Brain Trust lets you summon a panel of real, named experts to critique your architecture, review your writing, pressure-test your product strategy, or debate your design patterns. 10 built-in trusts, an extensible roster, and a working turn-taking protocol that ensures nothing useful gets skipped. Guest experts are drafted via an MCP server that maps topics to real persona cards, so the panel can reach into niche and novel territory without inventing expertise it does not have. Wrote up the full thinking here: https://tinyurl.com/agent-brain-trust https://ift.tt/ELmYfTX April 22, 2026 at 04:33AM
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent https://ift.tt/g9lw6WQ
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can: - search the general web - search Reddit - read and scrape basically any webpage Install it: npx openalmanac setup The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://ift.tt/wNWkHb4 April 22, 2026 at 03:42AM
Show HN: Backlit Keyboard API for Python https://ift.tt/Cj5Y0wT
Show HN: Backlit Keyboard API for Python It currently supports Linux. You can use this package to tinker with many things: for example, you could build a custom notification system that blinks the backlight when your website goes down. macOS support is underway. I haven't tested Windows yet; I don't use it anymore. In the future, if this package sees good growth, I'll be happy to make a similar Rust crate for it. https://ift.tt/WfKFrC7 April 19, 2026 at 12:22PM
Monday, April 20, 2026
Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation https://ift.tt/zUGjxSl
Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation Hi HN, I made a little something that could be useful to those like me who read PDFs at night. https://ift.tt/Rp05mIs April 21, 2026 at 01:52AM
Show HN: Git Push No-Mistakes https://ift.tt/TBa0DRN
Show HN: Git Push No-Mistakes no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits the CI pipeline for me. https://ift.tt/0imugwr April 21, 2026 at 12:10AM
Show HN: AI Coding Agent Guardrails enforced at runtime https://ift.tt/OmHtoIU
Show HN: AI Coding Agent Guardrails enforced at runtime Hello, looking for users interested in a devtool that lets developers centrally manage AI coding agents. It supports all the major AI coding agent tools like Claude Code, Codex, Antigravity, etc. Try it free! https://ift.tt/k8ZKD4F... https://sigmashake.com April 20, 2026 at 10:55PM
Sunday, April 19, 2026
Show HN: How context engineering works, a runnable reference https://ift.tt/Fw3ufSy
Show HN: How context engineering works, a runnable reference I've been presenting at local meetups about context engineering, RAG, skills, etc. I even have a vBrownBag coming up on LinkedIn about this topic, so I figured I would make a basic example that uses Bedrock so I can use it in my talks or vBrownBags. Hopefully it's useful. https://ift.tt/VhCs8ta April 17, 2026 at 11:50PM
Show HN: Newsmaps.io, a map of how news topics are covered by different countries https://ift.tt/OXlukeE
Show HN: Newsmaps.io, a map of how news topics are covered by different countries https://ift.tt/lFzq3rB April 20, 2026 at 02:32AM
Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/zyYlo6A
Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/1FLPVZk April 19, 2026 at 10:29PM
Show HN: Free PDF redactor that runs client-side https://ift.tt/YEPwLih
Show HN: Free PDF redactor that runs client-side I recently needed to verify past employment, and to do so I was going to upload paystubs from a previous employer; however, I didn't want to share my salary in that role. I did a quick search online, and most sites required sign-up or weren't clear about document privacy. I conceded and signed up for a free trial of Adobe Acrobat so I could use their PDF redaction feature. I figured there should be a dead simple way of doing this that's private, so I decided to create it myself. What this does is rasterize each page to an image with your redactions burned in, then rebuild the PDF so the text layer is permanently destroyed, not just covered up and easily retrievable. I welcome any and all feedback as this is my first live tool, thanks! https://redactpdf.net April 20, 2026 at 12:09AM
Saturday, April 18, 2026
Show HN: AI Subroutines – Run automation scripts inside your browser tab https://ift.tt/1oVpPrw
Show HN: AI Subroutines – Run automation scripts inside your browser tab We built AI Subroutines in rtrvr.ai. Record a browser task once, save it as a callable tool, replay it at: zero token cost, zero LLM inference delay, and zero mistakes. The subroutine itself is a deterministic script composed of discovered network calls hitting the site's backend as well as page interactions like click/type/find. The key architectural decision: the script executes inside the webpage itself, not through a proxy, not in a headless worker, not out of process. The script dispatches requests from the tab's execution context, so auth, CSRF, TLS session, and signed headers get added to all requests and propagate for free. No certificate installation, no TLS fingerprint modification, no separate auth stack to maintain. During recording, the extension intercepts network requests (MAIN-world fetch/XHR patch + webRequest fallback). We score and trim ~300 requests down to ~5 based on method, timing relative to DOM events, and origin. Volatile GraphQL operation IDs are detected and force a DOM-only fallback before they break silently on the next run. The generated code combines network calls with DOM actions (click, type, find) in the same function via an rtrvr.* helper namespace. Point the agent at a spreadsheet of 500 rows and with just one LLM call parameters are assigned and 500 Subroutines kicked off. 
Key use cases: - record sending an IG DM, then have a reusable and callable routine to send DMs at zero token cost - create a routine getting the latest products in a site catalog, call it to get thousands of products via direct GraphQL queries - set up a routine to file an EHR form based on parameters to the tool; AI infers parameters from current page context and calls the tool - reuse a routine daily to sync outbound messages on LinkedIn/Slack/Gmail to a CRM using an MCP server We see the fundamental reason browser agents haven't taken off is that for repetitive tasks, going through the inference loop is unnecessary. Better to record once and have the LLM generate a script leveraging all the possible ways to interact with a site and the wider web: directly calling backend APIs, interacting with the DOM, and calling 3P tools/APIs/MCP servers. https://ift.tt/XtmCZne April 18, 2026 at 02:33AM
Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/DB2RvWE
Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/GJqnglu April 18, 2026 at 11:45PM
Friday, April 17, 2026
Show HN: Pyra – a Python toolchain experiment inspired by uv and Bun https://ift.tt/DYZj17t
Show HN: Pyra – a Python toolchain experiment inspired by uv and Bun I’ve been working on Pyra for the past few months and wanted to start sharing it in public. Right now it’s focused on the core package/project management workflow: Python installs, init, add/remove, lockfiles, env sync, and running commands in the managed env. The bigger thing I’m exploring is whether Python could eventually support a more cohesive toolchain story overall, more in the direction of Bun: not just packaging, but maybe over time testing, tasks, notebooks, and other common workflow tools feeling like one system instead of a bunch of separate pieces. It’s still early, and I’m definitely not claiming it’s as mature as uv. I’m mostly sharing it now because I want honest feedback on whether the direction feels interesting or misguided. https://ift.tt/81YnRZ9 April 18, 2026 at 03:20AM
Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/RDJtFzP
Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/Yhg9saf April 17, 2026 at 09:13PM
Show HN: Waputer – The WebAssembly Computer https://ift.tt/Yy02Spv
Show HN: Waputer – The WebAssembly Computer Waputer is an operating system that runs entirely in the browser. When you visit the website at https://waputer.app , a kernel written in JavaScript sets up a filesystem and launches a WebAssembly program, which in turn talks to the kernel to handle the display and input. A purely terminal-based version is at https://waputer.dev . My original intention was to create programs that run in the browser that have a lot more in common with the desktop. The traditional "hello world" program is not really suited for the web. Waputer changes that. The GitHub repo at https://ift.tt/TQqLWZ5 gives a very brief overview of compiling a C program and running it on Waputer. There is a blog available from the main site that has a long-form explanation of Waputer and my motivations if you want some additional reading. https://waputer.app April 17, 2026 at 11:16PM
Thursday, April 16, 2026
Show HN: Spice simulation → oscilloscope → verification with Claude Code https://ift.tt/ZDuwzh0
Show HN: Spice simulation → oscilloscope → verification with Claude Code I built MCP servers for my oscilloscope and SPICE simulator so Claude Code can close the loop between simulation and real hardware. https://ift.tt/0Y6KFcO April 17, 2026 at 06:07AM
Show HN: Marky – A lightweight Markdown viewer for agentic coding https://ift.tt/2BL3dqv
Show HN: Marky – A lightweight Markdown viewer for agentic coding Hey HN, In this age of agentic coding, I've found myself spending a lot of time reviewing markdown files. Whether it's plans or documentation that I've asked my agent to generate for me, it seems that I spend more time reading markdown than code. I've tried a few different solutions to make it easier to read, such as Obsidian; however, I've found their Vault system quite limiting for this use case, and TUI solutions not quite as friendly to read as I wanted, so I made Marky. Marky is a lightweight desktop application that makes it incredibly easy to read and track your markdown files. It also has a helpful CLI, so you can just run marky FILENAME and have the app open the md file you pointed it at. I've been using it daily over the past week and really enjoy it, so I figured I'd share it. Here's a video if you want to check out a demo: https://www.youtube.com/watch?v=nGBxt8uOVjc . I have plans to add more features, such as incorporating agentic tools like claude code and codex into the UI, as well as developing a local git diff reviewer to let me do local code review before pushing up to git. I'd love to hear your thoughts and any feature suggestions you may have :) https://ift.tt/RD8eOUC April 16, 2026 at 09:38PM
Show HN: Online Sound Decibel Meter https://ift.tt/qIUM8mD
Show HN: Online Sound Decibel Meter https://ift.tt/myRFVQT April 17, 2026 at 12:09AM
Wednesday, April 15, 2026
Show HN: I built a Wikipedia-based AI deduction game https://ift.tt/VtJQm5L
Show HN: I built a Wikipedia-based AI deduction game I haven't seen anything like this, so I decided to build it in a weekend. How it works: you see a bunch of things pulled from Wikipedia displayed on cards. You ask yes-or-no questions to figure out which card is the secret article. The AI model has access to the image, the wiki text, and its own knowledge to answer your question. Happy to have my credits burned for the day, but I'll probably have to make this paid at some point, so enjoy. I found it's not easy to get cheap+fast+good responses, but the tech is getting there. Most of the prompts are running through Groq infra or hitting a cache keyed by a normalization of the prompt. https://ift.tt/qKZFuv5 April 16, 2026 at 05:43AM
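The post mentions a cache keyed by a normalization of the prompt. A minimal Python sketch of that idea, assuming a simple lowercase-and-collapse-whitespace normalization (the function name and rules here are illustrative, not the game's actual code):

```python
import hashlib

def normalized_cache_key(prompt: str) -> str:
    """Collapse whitespace and case so trivially different prompts share a cache entry."""
    canonical = " ".join(prompt.lower().split())
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

With this scheme, "Is it  an animal?" and "is it an animal?" produce the same key, so the second player to ask hits the cache instead of the model.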
Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/Buh9x20
Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/wTMn6Xt April 16, 2026 at 02:27AM
Show HN: Jeeves – TUI for browsing and resuming AI agent sessions https://ift.tt/90Tv64P
Show HN: Jeeves – TUI for browsing and resuming AI agent sessions I made Jeeves to search, preview, read through, and resume AI agent sessions in your terminal. It shows sessions across claude and codex in a single view, with more AI agent framework integrations to come. https://ift.tt/rGwFmVg April 16, 2026 at 01:01AM
Show HN: Monadic Networking Library for Go https://ift.tt/S6fPoW2
Show HN: Monadic Networking Library for Go A library built on top of ibm/fp-go for use in networking applications (servers, etc.) https://ift.tt/jaRmipd April 15, 2026 at 11:37PM
Tuesday, April 14, 2026
Show HN: Uninum – All elementary functions from a single operator, in Python https://ift.tt/ZxCkcL9
Show HN: Uninum – All elementary functions from a single operator, in Python https://ift.tt/NFOTLda April 15, 2026 at 03:16AM
Show HN: Run Python tools on Rust agents https://ift.tt/tZWP0DH
Show HN: Run Python tools on Rust agents Over at Tools-rs, we wanted to script tools faster with the help of large communities. The interest arose to build a way to bridge our Rust LLM runtimes with more traditional scripting languages, so we decided to find a way to bring Python tools into our ecosystem. Hence, we're introducing our first FFI for Python (powered by PyO3)! Calling a Python tool is as easy as adding a decorator to the Python function and then passing the script's (or folder's) path to the tool collection builder. Tools get serialized as JSON objects so they're fully observable by the AI, and you can call them directly from Rust. https://ift.tt/ezVSZqm April 15, 2026 at 02:01AM
Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/5DRFYfu
Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/CqwSV7U April 15, 2026 at 01:07AM
Monday, April 13, 2026
Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://ift.tt/GoC4KYD
Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://veylt.net/ April 13, 2026 at 11:40PM
Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/1VuxwPo
Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/30qKloU April 13, 2026 at 11:20PM
Sunday, April 12, 2026
Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers AI tools https://ift.tt/oIzTPpM
Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers AI tools https://www.stork.ai April 13, 2026 at 01:19AM
Show HN: A social feed with no strangers https://ift.tt/QWhyGVM
Show HN: A social feed with no strangers Grateful is a gratitude app with a simple social layer. You write a short entry, keep it private or share it to a circle. A circle is a small private group of your own making — family, close friends, whoever you'd actually want to hear from. It shows you the most recent post first. People in the circle can react or leave a comment. There's also a daily notification that sends you something you were grateful for in the past. Try it out on both iOS and Android. Go to grateful.so https://ift.tt/LOgZpkn April 13, 2026 at 04:11AM
Show HN: Rekal – Long-term memory for LLMs in a single SQLite file https://ift.tt/yW7kj5x
Show HN: Rekal – Long-term memory for LLMs in a single SQLite file I got tired of repeating myself to my LLM every session. rekal is an MCP server that stores memories in SQLite and retrieves them with hybrid search (BM25 + vectors + recency decay). One file, local embeddings, no API keys. https://ift.tt/slyefGr April 13, 2026 at 02:55AM
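The post describes retrieval as hybrid search combining BM25, vectors, and recency decay. A minimal sketch of how those three signals could be blended into one score (the weights and half-life are invented for illustration, not rekal's actual values):

```python
def hybrid_score(bm25: float, cosine: float, age_seconds: float,
                 half_life: float = 7 * 86400,
                 w_text: float = 0.5, w_vec: float = 0.4, w_rec: float = 0.1) -> float:
    """Blend keyword relevance (BM25), vector similarity, and recency into one score."""
    recency = 0.5 ** (age_seconds / half_life)  # halves every `half_life` seconds
    return w_text * bm25 + w_vec * cosine + w_rec * recency
```

Memories are then ranked by this score, so an older memory needs a higher text or vector match to outrank a fresh one.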
Saturday, April 11, 2026
Show HN: Bitcoin and Quantum Computing – a three-part research series https://ift.tt/6QyTJq7
Show HN: Bitcoin and Quantum Computing – a three-part research series https://bitcoinquantum.space April 12, 2026 at 12:47AM
Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning https://ift.tt/Wd4JvE2
Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning I've spent most of my career in marketing, which for the last few years has meant building consumer personas for campaigns. I wanted to see if I could make them real: living in real neighborhoods, with real weather, real budgets, real Saturday lunches. I always wanted to build a world, not a segment. This is that. 140 people so far, split across Vancouver (100), San Francisco (20), and Tokyo (20). Each one is about 1,000 lines of profile: family, finances, daily schedule, health, worldview, media diet, the channels you'd actually reach them through and the ones that will explicitly never work on them. Demographics are census-grounded: income, age, ethnicity, and household composition follow normal distributions fit to StatsCan, ACS, and Japanese e-Stat data, so the panel is roughly representative of the city instead of representative of whatever's overrepresented in an LLM's training corpus. The specific details come from real stories. They live in real local time on a live map. Right now it's Saturday 11:32 AM in Vancouver. Connor Hughes, a 31-year-old software developer at Clio in Gastown, is on his SPCA volunteer shift; he walks shelter dogs at the Boundary Road location every other Saturday morning. Hassan Khoury is in the lunch rush with Tony at his Lebanese café, his busiest day of the week. Ahmad Noori is pulling Saturday overtime on a construction site. Jordan Whitehorse is on mid-shift at East Cafe on Hastings. Every day is unique; no two days repeat. A 3 AM job fetches live data: weather from Open-Meteo, grocery CPI from StatsCan food vectors, Metro Vancouver transit delays from the Google Routes API against specific corridors, Vancouver gas prices, sunrise and sunset. Each persona has a modifier file that reacts to all of it.
When Vancouver gas hits $1.85/L, Jaspreet the long-haul trucker's Coquihalla run to Calgary stops feeling worth it; his margins are thin, and his mood takes a hit. When food CPI spikes, Gurinder at the Amazon warehouse stops buying the $9 Subway and brings roti from home. A health flare rolls probabilistically each morning: maybe it's nothing, maybe Tanya's six-month-old had a rough night, maybe Frank's back is acting up. The days stack up and get remembered. Every persona has a journal: today's entry in a markdown file, a week of them compressed into a "dream" of ~30 lines that keeps the shape without the texture, a month compressed into ~15 lines. It's their journal. I'm not writing it; the simulation is. Click any persona to open their detail, or hit "Talk to [name]" to have a conversation; they run on Claude Haiku with their full profile and recent diary entries as context. Not a product, not a startup, just a thing I've been quietly working on. They feel, in a way I didn't expect, like my fully grown kids. Happy to answer questions. https://brasilia-phi.vercel.app April 12, 2026 at 12:12AM
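As a rough illustration of what a persona's modifier file might do with the 3 AM live data, here is a hypothetical sketch (field names, thresholds, and the function itself are invented for this example, not the project's actual schema):

```python
def apply_signals(persona: dict, signals: dict) -> dict:
    """Nudge a persona's daily state based on live city data (thresholds invented)."""
    p = dict(persona)  # don't mutate the stored profile
    # High gas prices squeeze a trucker's thin margins and mood.
    if p.get("job") == "long_haul_trucker" and signals.get("gas_cad_per_l", 0.0) >= 1.85:
        p["mood"] = p.get("mood", 0) - 1
    # A food-CPI spike changes lunch plans for a tight budget.
    if p.get("budget") == "tight" and signals.get("food_cpi_spike", False):
        p["lunch"] = "roti from home"
    return p
```

Run once per persona per morning, a file of rules like this is enough to make the day's journal entries react to real-world conditions.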
Show HN: We scanned uscis.gov for third-party trackers. The results are jarring https://ift.tt/FhMTZIQ
Show HN: We scanned uscis.gov for third-party trackers. The results are jarring https://ift.tt/g4FqmCp April 11, 2026 at 07:13PM
Friday, April 10, 2026
Show HN: Eve – Managed OpenClaw for work https://ift.tt/D0ipyWP
Show HN: Eve – Managed OpenClaw for work Eve is an AI agent harness that runs in an isolated Linux sandbox (2 vCPUs, 4GB RAM, 10GB disk) with a real filesystem, headless Chromium, code execution, and connectors to 1000+ services. You give it a task and it works in the background until it's done. I built this because I wanted OpenClaw without the self-hosting, pointed at actual day-to-day work. I’m thinking less personal assistant and more helpful colleague. Here’s a short demo video: https://ift.tt/qKz2AJc The main interface is a web app where you can watch work happen in real time (agents spawning, files being written, use of the CLI). There's also an iMessage integration so you can fire a task asynchronously, put your phone down, and get a reply when it's finished. Under the hood, there's an orchestrator (Claude Opus 4.6) that routes to the right domain-specific model for each subtask: browsing, coding, research, and media generation. For complex tasks it spins up parallel sub-agents that coordinate through the shared filesystem. They have persistent memory across sessions so context compounds over time. I’ve packaged it with a bunch of pre-installed skills so it can execute in a variety of job roles (sales, marketing, finance) at runtime. Here are a few things Eve has helped me with in the last couple days: - Edit this demo video with a voice over of Garry: https://www.youtube.com/watch?v=S4oD7H3cAQ0 - Do my tax returns - To build HN as if it was the year 2030: https://ift.tt/94RiUF3 AMA on the architecture and lmk your thoughts :) P.S. I've given every new user $100 worth of credits to try it. https://eve.new/login April 10, 2026 at 11:01PM
Show HN: FluidCAD – Parametric CAD with JavaScript https://ift.tt/nk9w8vT
Show HN: FluidCAD – Parametric CAD with JavaScript Hello HN, This is a CAD-by-code project I have been working on in my free time for more than a year now. I built it with 3 goals in mind: - It should be familiar to CAD designers who have used other programs: same workflow, same terminology. - It should reduce the mental effort required to create models as much as possible. This is achieved by providing live rendering and visual guidance as you type; allowing the user to reference existing edges/faces in the scene instead of having to calculate everything; providing interactive mouse helpers for features that are hard to write by code (only 3 interactive modes for now: edge trimming, sketch region extrude, Bezier curve drawing); and implicit coding whenever possible, e.g. sensible defaults for most parameters, and automatically fusing intersecting objects together so you do not have to worry about what object needs to be fused with what. - It should be reasonably fast: the scene objects are cached and only the updated objects are re-computed. I think I have achieved these goals to a good extent. The program is still in early stages and there are many features I want to add or rewrite, but I think it is already usable for simple models. https://fluidcad.io/ April 11, 2026 at 12:09AM
Thursday, April 9, 2026
Show HN: Last Year I wrote a (Sci)fictional story where the EFF was a player [pdf] https://ift.tt/cfpUb0n
Show HN: Last Year I wrote a (Sci)fictional story where the EFF was a player [pdf] https://ift.tt/qEvI9pg April 9, 2026 at 11:43PM
Show HN: Logoshi, a brand kit generator for solo founders https://ift.tt/mSB3aDv
Show HN: Logoshi, a brand kit generator for solo founders https://logoshi.com/ April 9, 2026 at 10:12PM
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct https://ift.tt/6Q4GeUW
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of Cline. Full evals on the GitHub page compare 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium-complexity tasks. Each task, each diff, and the correctness + cost info are on GitHub. Dirac is 64.8% cheaper than the average of the other 6. https://ift.tt/nzZ7pXa April 9, 2026 at 05:36PM
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://ift.tt/JyI6DT4
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://homebutler.dev April 9, 2026 at 05:39PM
Show HN: CSS Studio. Design by hand, code by agent https://ift.tt/LlzXoE7
Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs on your browser, sends updates to your existing AI agent, which edits any codebase. You can actually play around with the latest version directly on the site. Technically, the way this works is you view your site in dev mode and start editing it. In your agent, you can run /studio which then polls (or uses Claude Channels) an MCP server. Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has some instructions on how best to implement them. It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles and an animation timeline editor. https://cssstudio.ai April 9, 2026 at 04:53PM
Show HN: Moon simulator game, ray-casting https://ift.tt/Zzgnm2a
Show HN: Moon simulator game, ray-casting Did this a few years ago. Seems apropos. Sources and more here: https://ift.tt/pJcOWBw https://ift.tt/Y0rCkLz April 6, 2026 at 10:39PM
Wednesday, April 8, 2026
Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/y6HZWs3
Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/s9AKyF8 April 8, 2026 at 06:04PM
Show HN: 500k+ events/sec transformations for ClickHouse ingestion https://ift.tt/1cDuIan
Show HN: 500k+ events/sec transformations for ClickHouse ingestion Hi HN! We are Ashish and Armend, founders of GlassFlow. Over the last year, we worked with teams running high-throughput pipelines into self-hosted ClickHouse. Mostly for observability and real-time analytics. A question that came repeatedly was: What happens when throughput grows? Usually, things work fine at 10k events/sec, but we started seeing backpressure and errors at >100k. When the throughput per pipeline stops scaling, then adding more CPU/memory doesn’t help because often parts of the pipeline are not parallelized or are bottlenecked by state handling. At this point, engineers usually scale by adding more pipeline instances. That works but comes with some trade-offs: - You have to split the workload (e.g., multiple pipelines reading from the same source) - Transformation logic gets duplicated across pipelines - Stateful logic becomes harder to manage and keep consistent - Debugging and changes get more difficult because the data flow is fragmented Another challenge arises when working with high-cardinality keys like user IDs, session IDs, or request IDs, and when you need to handle longer time windows (24h or more). The state grows quickly and many systems rely on in-memory state, which makes it expensive and harder to recover from failures. We wanted to solve this problem and rebuild our approach at GlassFlow. Instead of scaling by adding more pipelines, we scale within a single pipeline by using replicas. Each replica consumes, processes, and writes independently, and the workload is distributed across them. In the benchmarks we’re sharing, this scales to 500k+ events/sec while still running stateful transformations and writing into ClickHouse. 
A few things we think are interesting: - Scaling is close to linear as you add replicas - Works with stateful transformations (not just stateless ingestion) - State is backed by a file-based KV store instead of relying purely on memory - The ClickHouse sink is optimized for batching to avoid small inserts - The product is built with Go Full write-up + benchmarks: https://ift.tt/ol5djf9... Repo: https://ift.tt/BCG9pDw Happy to answer questions about the design or trade-offs. https://ift.tt/BCG9pDw April 8, 2026 at 10:56PM
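Scaling within one pipeline by adding replicas usually rests on stable key partitioning, so every event for a given high-cardinality key (user ID, session ID) lands on the same replica and that replica's window state stays local. A hypothetical Python sketch of the idea (GlassFlow itself is written in Go; this is not its code):

```python
import hashlib

def replica_for(partition_key: str, n_replicas: int) -> int:
    """Stable hash partitioning: the same key always routes to the same replica."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % n_replicas
```

Because the mapping is deterministic, a replica can keep its per-key aggregation state in a local file-backed KV store without coordinating with the others, which is what makes the scaling close to linear.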
Tuesday, April 7, 2026
Show HN: Unicode Steganography https://ift.tt/KN4i0s2
Show HN: Unicode Steganography I built a demo of two Unicode steganography techniques, zero-width characters and homoglyph substitution, in the context of AI misalignment. The first is about the use of two invisible zero-width characters (ZWS and ZWNJ) to binary-encode text. The second is much cooler. Many characters in the Latin and Cyrillic alphabets look nearly identical but have different Unicode code points. If you have text to encode and convert it into a binary representation (1s and 0s), you could take plain English "carrier" text and, for each 1 in the binary representation, substitute the Cyrillic letter equivalent. Decoding the message requires traversing the text and seeing where Cyrillic letters could have been substituted but weren't, and where they were, leading to 0s and 1s respectively, which can be built back into your original hidden text. In both cases, these are detectable, but the interesting problem for me is whether an LLM could eventually invent an encoding that goes unnoticed by both us and automated detection. If LLMs were able to covertly include messages in plaintext, misaligned AI agents could eventually communicate across MCP/A2A and individual chat session boundaries undetected. A deceptive LLM might seem helpful but work against your goals. It could tell other agents it interacts with over MCP/A2A to help it discreetly fail, signal intent, and avoid tripping oversight/safety mechanisms. Furthermore, oversight mechanisms become more difficult to implement if we can't believe our own eyes. https://ift.tt/d0hXRoK April 7, 2026 at 06:27PM
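The zero-width technique can be sketched in a few lines of Python. This is my own minimal encoding (ASCII-only payload, ZWS for 0 and ZWNJ for 1), not necessarily the demo's exact scheme:

```python
ZERO, ONE = "\u200b", "\u200c"  # zero-width space, zero-width non-joiner

def hide(secret: str, carrier: str) -> str:
    """Append the secret as invisible zero-width characters, one per bit."""
    bits = "".join(f"{ord(ch):08b}" for ch in secret)  # assumes ASCII payload
    return carrier + "".join(ZERO if b == "0" else ONE for b in bits)

def reveal(text: str) -> str:
    """Keep only the zero-width characters and reassemble the hidden bytes."""
    bits = "".join("0" if ch == ZERO else "1" for ch in text if ch in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

The stego text renders identically to the carrier in most UIs, which is exactly why stripping or flagging zero-width code points is the standard detection countermeasure.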
Show HN: Marimo pair – Reactive Python notebooks as environments for agents https://ift.tt/F8LdsG3
Show HN: Marimo pair – Reactive Python notebooks as environments for agents Hi HN! We're excited to share marimo pair [1] [2], a toolkit that drops AI agents into a running marimo notebook [3] session. This lets agents use marimo as working memory and a reactive Python runtime, while also making it easy for humans and agents to collaborate on computational research and data work. GitHub repo: https://ift.tt/zL9cIKD Demo: https://www.youtube.com/watch?v=6uaqtchDnoc marimo pair is implemented as an agent skill. Connect your agent of choice to a running notebook with: /marimo-pair pair with me on my_notebook.py The agent can do anything a human can do with marimo and more. For example, it can obtain feedback by running code in an ephemeral scratchpad (inspect variables, run code against the program state, read outputs). If it wants to persist state, the agent can add cells, delete them, and install packages (marimo records these actions in the associated notebook, which is just a Python file). The agent can even manipulate marimo's user interface — for fun, try asking your agent to greet you from within a pair session. The agent effects all actions by running Python code in the marimo kernel. Under the hood, the marimo pair skill explains how to discover and create marimo sessions, and how to control them using a semi-private interface we call code mode. Code mode lets models treat marimo as a REPL that extends their context windows, similar to recursive language models (RLMs). But unlike traditional REPLs, the marimo "REPL" incrementally builds a reproducible Python program, because marimo notebooks are dataflow graphs with well-defined execution semantics. As it uses code mode, the agent is kept on track by marimo's guardrails, which include the elimination of hidden state: run a cell and dependent cells are run automatically, delete a cell and its variables are scrubbed from memory. 
By giving models full control over a stateful reactive programming environment, rather than a collection of ephemeral scripts, marimo pair makes agents active participants in research and data work. In our early experimentation [4], we've found that marimo pair accelerates data exploration, makes it easy to steer agents while testing research hypotheses, and can serve as a backend for RLMs, yielding a notebook as an executable trace of how the model answered a query. We even use marimo pair to find and fix bugs in itself and marimo [5]. In these examples the notebook is not only a computational substrate but also a canvas for collaboration between humans and agents, and an executable, literate artifact composed of prose, code, and visuals.

marimo pair is early and experimental. We would love your thoughts. [1] https://ift.tt/zL9cIKD [2] https://ift.tt/JXcG5tK [3] https://ift.tt/kyulF0b [4] https://www.youtube.com/watch?v=VKvjPJeNRPk [5] https://ift.tt/JjdLoQC... https://ift.tt/zL9cIKD April 7, 2026 at 11:17PM
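To make the "no hidden state" guarantee concrete, here is a toy sketch (not marimo's actual internals, and the `Cell`/`Notebook` names are invented) of the reactive dataflow idea: cells declare their dependencies, and re-running one cell automatically re-runs its dependents, so stale values never linger.

```python
# Toy sketch of reactive notebook execution: cells form a dataflow
# graph, and re-running a cell re-runs every dependent cell, so no
# stale ("hidden") state survives. Illustrative only, not marimo's API.

class Cell:
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, tuple(deps)

class Notebook:
    def __init__(self):
        self.cells, self.state = {}, {}

    def add(self, cell):
        self.cells[cell.name] = cell
        self.run(cell.name)

    def run(self, name):
        cell = self.cells[name]
        self.state[name] = cell.fn(*(self.state[d] for d in cell.deps))
        # Reactivity: re-run every cell that depends on this one.
        for other in self.cells.values():
            if name in other.deps:
                self.run(other.name)

nb = Notebook()
nb.add(Cell("x", lambda: 10))
nb.add(Cell("y", lambda x: x * 2, deps=["x"]))  # y tracks x
nb.cells["x"].fn = lambda: 50   # edit the upstream cell...
nb.run("x")                     # ...and the dependent recomputes too
```

After the final `nb.run("x")`, `nb.state["y"]` has been recomputed from the new `x`; there is no path by which `y` can keep its old value, which is the property the post describes.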
Show HN: C64 Ultimate Toolbox for macOS https://ift.tt/GnPqvRf
Show HN: C64 Ultimate Toolbox for macOS My wife got me a Commodore 64 Ultimate ( https://ift.tt/dC6pPVj ) for my birthday, and it became an obvious hassle to keep an entire monitor connected to it just to tinker with it. When I found out the Ultimate FPGA board has built-in support for streaming the video and audio data over the network, as well as a REST API allowing for file and configuration management, I set to work on an app to remotely control my new device.

- View and hear your Commodore 64 Ultimate or Ultimate 64 device over the network, with a fully configurable CRT shader so you can dial in just the right retro feel.
- View and manage files on your device, including support for drag-and-drop folder/file upload, as well as the ability to run and mount disks, create new disk images, and more.
- BASIC Scratchpad: a mini-IDE in the app where you can write BASIC programs and send them directly to any of your connected devices to run.
- Keyboard forwarding lets you interact with your device using your computer keyboard, and includes a keyboard overlay for the Commodore-specific keys your keyboard definitely doesn't have.
- Visual memory viewer and editor, along with a terminal-like memory viewer and editor for debugging and tinkering.
- Built-in support for cleanly recording videos and taking screenshots.
- Fully native macOS AppKit app.

Here's a rough-and-ready demo video I recorded and sent to App Review for the 2.0 release, which was approved yesterday: https://www.youtube.com/watch?v=_2wJO2wOGm8 Please note again: this app only works with the Commodore 64 Ultimate or Gideon's Ultimate 64 devices. The Ultimate II does not have the data-streams feature needed to power the display. https://ift.tt/HfAn8Y0 April 7, 2026 at 10:09PM
Monday, April 6, 2026
Show HN: Meta-agent: self-improving agent harnesses from live traces https://ift.tt/EAjXO24
Show HN: Meta-agent: self-improving agent harnesses from live traces We built meta-agent: an open-source library that automatically and continuously improves agent harnesses from production traces. Point it at an existing agent, a stream of unlabeled production traces, and a small labeled holdout set. An LLM judge scores unlabeled production traces as they stream. A proposer reads failed traces and writes one targeted harness update at a time, such as changes to prompts, hooks, tools, or subagents. The update is kept only if it improves holdout accuracy. On tau-bench v3 airline, meta-agent improved holdout accuracy from 67% to 87%. We open-sourced meta-agent. It currently supports Claude Agent SDK, with more frameworks coming soon. Try it here: https://ift.tt/v8D0M3n https://ift.tt/v8D0M3n April 7, 2026 at 12:52AM
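The outer loop described above (judge scores traces, proposer writes one update, keep it only if the holdout improves) can be sketched as follows. The judge, proposer, and "harness" here are toy stand-ins, not the library's API:

```python
# Hedged sketch of a propose-and-keep-if-better loop, as the post
# describes: evaluate a candidate harness update against a labeled
# holdout set and keep it only if accuracy improves.

def evaluate(harness, holdout):
    """Fraction of holdout cases the harness answers correctly."""
    return sum(harness(case) == label for case, label in holdout) / len(holdout)

def improve(harness, failed_traces, propose, holdout):
    baseline = evaluate(harness, holdout)
    candidate = propose(harness, failed_traces)  # one targeted update
    # Keep the candidate only if it beats the holdout baseline.
    return candidate if evaluate(candidate, holdout) > baseline else harness

# Toy task: classify whether a number is even.
holdout = [(n, n % 2 == 0) for n in range(10)]
bad = lambda n: True                              # always answers "even"
proposed = lambda h, traces: (lambda n: n % 2 == 0)
better = improve(bad, failed_traces=[1, 3], propose=proposed, holdout=holdout)
```

The gate matters: since the judge's labels on streaming traces are noisy, the labeled holdout is what prevents the proposer from "improving" the harness into a regression.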
Show HN: ComputeLock – Insurance to reduce unpredictable compute spend https://ift.tt/UuNIZPb
Show HN: ComputeLock – Insurance to reduce unpredictable compute spend Reserved instances save money... until utilization changes, and you're still paying. With ComputeLock, the risk of on-demand price spikes doesn't exist - we offer burst insurance.

1. Send us an estimate of the on-demand spend you expect, and from which provider.
2. We confirm the maximum we'll cover for a small fee, and you get it in writing.
3. If on-demand prices spike, we'll reimburse you.

We plan to start by working with smaller developers. We do this by monitoring supply and demand for compute. Of course, we'll get it wrong sometimes. But like any insurance, you'll only need it when you NEED it. Would love to hear your feedback: https://ift.tt/ohfT3wK https://ift.tt/ohfT3wK April 6, 2026 at 10:53PM
Sunday, April 5, 2026
Show HN: I built a tool to show how much ARR you lose to FX fees https://ift.tt/8zQPyqr
Show HN: I built a tool to show how much ARR you lose to FX fees Hey HN, I started my career as a finance manager, transitioned into product management, and now I'm building my own products. Back in my finance days, while managing a £6M budget, I uncovered a £15k leak hiding in plain sight: FX fees. Today, I see solo founders making the exact same mistake. I realised most founders are quietly losing 2-5% of their revenue to what I call the Lazy Tax:

- Stripe's ~2% auto-conversion fee on inbound revenue,
- plus their local bank's ~3% spread when paying for global SaaS tools (AWS, Claude, Ads).

So I built FixMyFX to show founders their exact leak and how to fix it (using multi-currency accounts to achieve a zero-FX-leak setup). Initially, I had Claude build this in React, then realised a simple calculator shouldn't need a 150kb payload and a complex build process. I threw the React code away and rebuilt it as a single lightweight HTML file using Alpine.js and Tailwind. It's completely free and ungated. I hope it helps you keep a bit more of your hard-earned revenue. Would love your feedback. Tania https://fixmyfx.com April 5, 2026 at 11:41PM
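The arithmetic behind the calculator is simple enough to sketch. The rates below are the post's own rough estimates (~2% inbound conversion, ~3% bank spread), not Stripe's or any bank's published pricing, and the example figures are invented:

```python
# Back-of-envelope FX-leak math: fee on the share of revenue that gets
# auto-converted, plus the bank spread on foreign-currency SaaS spend.
# Rates are the post's estimates, not published pricing.

def fx_leak(arr, foreign_revenue_share, saas_spend,
            conversion_fee=0.02, bank_spread=0.03):
    inbound = arr * foreign_revenue_share * conversion_fee
    outbound = saas_spend * bank_spread
    return inbound + outbound

# Hypothetical example: $500k ARR, 40% of it auto-converted,
# $60k/yr in foreign-currency SaaS bills.
leak = fx_leak(500_000, 0.40, 60_000)   # 4,000 + 1,800 = 5,800 per year
```

Even at modest scale the leak lands in the thousands per year, which is the post's point about the "Lazy Tax".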
Show HN: A Dad Joke Website https://ift.tt/g8lTbwB
Show HN: A Dad Joke Website A dad joke website where you can rate random dad jokes, 1-5 groans. Sourced from 4 different places, all cited, all categorized, and ranked by top voted. Help me create the world's best dadabase! https://joshkurz.net/ April 5, 2026 at 11:24PM
Saturday, April 4, 2026
Show HN: Vibooks – Local-first bookkeeping software built for AI agents https://ift.tt/MjUZCh2
Show HN: Vibooks – Local-first bookkeeping software built for AI agents https://vibooks.ai/ April 5, 2026 at 06:09AM
Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://ift.tt/O2quUFj
Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://contrapunk.com/ April 5, 2026 at 06:10AM
Show HN: Dev Personality Test https://ift.tt/ON9cqFX
Show HN: Dev Personality Test I was curious what a personality test for developers would look like, so I created one using FastAPI, HTMX, and AlpineJS. https://ift.tt/rPDzBQh April 5, 2026 at 02:59AM
Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown https://ift.tt/ulCJPcZ
Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown The latest 3Blue1Brown video [1] about the M. C. Escher print gallery effect inspired me to re-implement the effect as a WebGL fragment shader on my own. [1]: https://www.youtube.com/watch?v=ldxFjLJ3rVY https://ift.tt/q5sa4mN April 5, 2026 at 01:13AM
Friday, April 3, 2026
Show HN: Ismcpdead.com – Live dashboard tracking MCP adoption and sentiment https://ift.tt/uOnvNpI
Show HN: Ismcpdead.com – Live dashboard tracking MCP adoption and sentiment Built this to track the ongoing debate around Model Context Protocol - whether it's gaining real traction or just hype. Pulls live data from GitHub, HN, Reddit and a few other sources. Curious what the HN crowd thinks given how active the MCP discussion has been here. https://ismcpdead.com April 4, 2026 at 12:58AM
Show HN: Community Curated Lists https://ift.tt/FShw5eG
Show HN: Community Curated Lists https://ift.tt/mIvfQMP April 4, 2026 at 12:02AM
Thursday, April 2, 2026
Show HN: A P2P messenger with dual network modes (Fast and Tor) https://ift.tt/6XTlKwE
Show HN: A P2P messenger with dual network modes (Fast and Tor) Hello HN, I have been working on a desktop P2P messenger called Kiyeovo for the last ~8 months, and I just published its beta version. Quick backstory: it started out as a CLI application for my graduate thesis, where I tried to make the most secure and private messenger application possible. Then I transformed it into a desktop application, gave it "clearnet" support, and added a bunch of features.

Short summary: the app runs in 2 completely isolated modes:
- fast mode: relay/DCUtR -> lower latency, calls support
- anonymous mode: Tor message routing -> slower, anonymous

These modes use different protocol IDs, DHT namespaces, pubsub topics, and storage scopes, so there's no data crossover between them. Messaging works peer-to-peer when both parties are online, but falls back to DHT "offline buckets" when one of them is not. To ensure robustness, messages are ACK-ed and deleted after being read. Group chats use GossipSub for realtime messaging. Group messages are also saved to offline buckets so that offline users can read them upon logging in. Kick/Join/Leave events are also propagated using the DHT. Group metadata and all offline data is, of course, encrypted.

Other features: chats are E2E-encrypted, file sharing is supported, and 1:1 audio/video calls are supported (only in fast mode, using WebRTC).

Tradeoffs: Tor has noticeable latency; offline delivery is not immediately guaranteed, but rather "eventually consistent"; the beta version does not have group calls yet. I'd appreciate feedback, which is why I posted this as a beta version. Repo: https://ift.tt/MxmPhlU https://ift.tt/wJuenEO April 2, 2026 at 09:02PM
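The offline-bucket lifecycle (deposit while the peer is offline, drain on login, delete on ACK) can be sketched like this. Kiyeovo stores these encrypted in a DHT; the in-memory dict and method names below are illustrative only:

```python
# Toy sketch of the ACK-then-delete lifecycle for offline delivery:
# messages for an offline recipient sit in a per-recipient bucket,
# are drained when they log in, and are deleted once ACKed.
from collections import defaultdict

class OfflineBuckets:
    def __init__(self):
        self.buckets = defaultdict(dict)  # recipient -> {msg_id: payload}

    def deposit(self, recipient, msg_id, payload):
        self.buckets[recipient][msg_id] = payload

    def drain(self, recipient):
        """Called on login: return pending messages for the client to ACK."""
        return dict(self.buckets[recipient])

    def ack(self, recipient, msg_id):
        # Reading doubles as deletion: ACKed messages are removed.
        self.buckets[recipient].pop(msg_id, None)

store = OfflineBuckets()
store.deposit("alice", "m1", b"ciphertext-1")
store.deposit("alice", "m2", b"ciphertext-2")
pending = store.drain("alice")     # alice logs in
for mid in pending:
    store.ack("alice", mid)        # bucket is now empty
```

Because deletion happens only after an explicit ACK, a crash between drain and ACK leaves the message in the bucket, which is what makes the delivery "eventually consistent" rather than lossy.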
Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust https://ift.tt/MWHyNXD
Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust Hi, I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation. The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below: Project's GitHub repo: https://ift.tt/QqTGWJt RiceVM documentation: https://habedi.github.io/ricevm/ April 3, 2026 at 01:19AM
Show HN: Most products have no idea what their AI agents did yesterday https://ift.tt/moRyNMc
Show HN: Most products have no idea what their AI agents did yesterday We build collaboration SDKs at Velt (YC W22). Comments, presence, real-time editing (CRDT), recording, notifications. A pattern we keep seeing: products add AI agents that write, edit, and approve things. Human actions get logged. Agent actions don't. Same workflow, different accountability. We shipped Activity Logs to fix this. Same record for humans and AI agents. Immutable by default. Auto-captures collaboration events, plus createActivity() for your own. Curious how others are handling this. https://ift.tt/5D7GUhM April 2, 2026 at 11:55PM
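The pattern the post argues for (one append-only record, same schema for human and agent actions) looks roughly like this. The `create_activity` name mirrors the post's `createActivity`, but this is a toy, not Velt's SDK:

```python
# Sketch of a unified, append-only activity log: human and agent
# actions share one schema and one record. Illustrative only.
import time

class ActivityLog:
    def __init__(self):
        self._entries = []

    def create_activity(self, actor, actor_type, action, target):
        entry = {
            "actor": actor,
            "actor_type": actor_type,  # "human" or "agent", same schema
            "action": action,
            "target": target,
            "ts": time.time(),
        }
        self._entries.append(entry)
        return dict(entry)

    def entries(self):
        # Hand out copies so callers can't mutate history in place.
        return [dict(e) for e in self._entries]

log = ActivityLog()
log.create_activity("dana", "human", "edit", "doc-42")
log.create_activity("review-bot", "agent", "approve", "doc-42")
```

The key design choice is that agent actions are not a special case: "which agent called which tool" becomes an ordinary query over the same log that already covers humans.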
Wednesday, April 1, 2026
Show HN: Roadie – An open-source KVM that lets AI control your phone https://ift.tt/xnsRqKm
Show HN: Roadie – An open-source KVM that lets AI control your phone Roadie is an open-source hardware KVM controlled via HTTP. HDMI capture in, USB keyboard/mouse/touch out, all from a browser. Hardware KVMs with web UIs have existed for years (PiKVM, TinyPilot, JetKVM, etc.). Roadie adds two things they don't generally have: multi-touch support (so it works with phones and tablets) and a focus on agent-driven use: any browser automation tool can drive the /view page directly, or connect to the WebSocket endpoint for lower-level programmatic control. ~$86 in parts, including two CircuitPython boards, an HDMI-to-USB dongle, and a Go server running on the host. No software needed on the target. https://ift.tt/ED5i6AF April 2, 2026 at 01:16AM
Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude https://ift.tt/trMiNec
Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude Canon doesn't provide a working macOS driver for the PIXMA G3010. I was stuck using Canon's iPhone app for all printing and scanning. I pointed Claude Code at a packet capture from the iPhone app and it reverse-engineered Canon's proprietary CHMP protocol, wrote a pure Rust eSCL-to-CHMP bridge daemon, and built a .pkg installer. My role was the physical parts: capturing packets, testing on the printer, confirming Image Capture worked. The protocol docs in docs/ are probably the first public documentation of Canon's CHMP protocol. https://ift.tt/eIWVA2D April 1, 2026 at 11:58PM
Show HN: Modern AI assisted goals and performance management https://ift.tt/Eva62Uc
Show HN: Modern AI assisted goals and performance management Hey hey I'm launching this on product hunt and I did a show many months back but prfrm is way better now prfrm - by ArchitectFWD, is a performance management platform. It is a platform for Teams, Startups & Organizations and also Individuals, Solo Founders & Families to organise and track goals, set plans and review periods to stay on top of development plans and set out the path for success. Typically uses for Review periods, performance plans, goals Also just added Team OKR The Goals AI assistant can create meaningful goals linked to the OKR or to individual goals and plan outcomes I included a journal to track progress The AI assistant can go over the journal for next steps, talking points for the next meeting or check in An a Kanban style schedule tracking --- I built prfrm by ArchitectFWD because I was tired of traditional performance management Spreadsheet.. blank cell ..what is next.. No more. I can set myself up, set a period, set the plan and outcome and use the AI assistant to help generate meaningful goals. I can track how I’m going and plot my path to success. With the addition of team OKR (objectives and key results) the goals can be mapped to team objectives as well, strengthening goals to real business goals https://prfrmhq.com goes to https://ift.tt/i04wAbn There's a silent video on the landing of how it works in mobile view If you want to comment on product hunt that's welcome too at https://ift.tt/qOifck0... Lastly, want to see video's? They're on https://www.youtube.com/playlist?list=PLBYzijBKDTJVrBzOlYuU0... https://ift.tt/i04wAbn April 2, 2026 at 12:34AM
Tuesday, March 31, 2026
Show HN: How This Graybeard Built the Fastest and Freest Postgres BM25 Search https://ift.tt/HtEFZM8
Show HN: How This Graybeard Built the Fastest and Freest Postgres BM25 Search Last summer we faced a conundrum at my company, Tiger Data, a Postgres cloud vendor whose main business is in time-series data. We were trying to grow our business towards emerging AI-centric workloads and wanted to provide a state-of-the-art hybrid search stack in Postgres. We'd already built pgvectorscale in house with the goal of scaling semantic search beyond pgvector's main-memory limitations. We just needed a scalable ranked keyword search solution too. The problem: core Postgres doesn't provide this; the leading Postgres BM25 extension, ParadeDB, is guarded behind the AGPL; and developing our own extension appeared daunting. We'd need a small team of sharp engineers and 6-12 months, I figured. And we'd probably still fall short of the performance of a mature system like Parade/Tantivy. Or would we? I'd been experimenting long enough with AI-boosted development at that point to realize that with the latest tools (Claude Code + Opus) and an experienced hand (I've been working in database systems internals for 25 years now), the old time estimates pretty much go out the window. I told our CTO I thought I could solo the project in one quarter. This raised some eyebrows. It did take a little more time than that (two quarters), and we got some real help from the community (amazing!) after open-sourcing the pre-release. But I'm thrilled/exhausted today to share that pg_textsearch v1.0 is freely available via open source (Postgres license), on Tiger Data cloud, and hopefully soon, at a hyperscaler near you: https://ift.tt/1b5TGhO In the blog post accompanying the release, I overview the architecture and present benchmark results using MS-MARCO. To my surprise, we were not only able to meet Parade/Tantivy's query performance, but to exceed it substantially, measuring a 4.7x advantage in query throughput at scale: https://ift.tt/8wTo60m... 
It's exciting (and, to be honest, a little unnerving) to see a field I've spent so much time toiling in change so quickly, in ways that enable us to be more ambitious in our technical objectives. Technical moats are moats no longer. The benchmark scripts and methodology are available in the GitHub repo. Happy to answer any questions in the thread. Thanks, TJ (tj@tigerdata.com) March 31, 2026 at 09:59PM https://ift.tt/1b5TGhO March 31, 2026 at 09:59PM
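For readers unfamiliar with BM25, the scoring formula this family of extensions implements is worth stating. pg_textsearch's actual index structures are far more involved; this is just the classic Okapi BM25 math on a toy corpus:

```python
# Okapi BM25: per-term IDF weighted by a saturating term-frequency
# factor, normalized by document length relative to the corpus average.
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(term in d for d in corpus)              # document frequency
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc.count(term)                             # term frequency in doc
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    "postgres full text search".split(),
    "bm25 ranked keyword search".split(),
    "vector semantic search".split(),
]
scores = [bm25_score(["bm25", "search"], d, corpus) for d in corpus]
best = scores.index(max(scores))   # the doc mentioning the rare term wins
```

The `k1` and `b` defaults shown are the conventional ones; the rare term ("bm25") dominates because rarity drives IDF, which is exactly what plain Postgres `ts_rank` does not capture.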
Show HN: PhAIL – Real-robot benchmark for AI models https://ift.tt/RiBwNOM
Show HN: PhAIL – Real-robot benchmark for AI models I built this because I couldn't find honest numbers on how well VLA models [1] actually work on commercial tasks. I come from search ranking at Google where you measure everything, and in robotics nobody seemed to know. PhAIL runs four models (OpenPI/pi0.5, GR00T, ACT, SmolVLA) on bin-to-bin order picking – one of the most common warehouse operations. Same robot (Franka FR3), same objects, hundreds of blind runs. The operator doesn't know which model is running. Best model: 64 UPH. Human teleoperating the same robot: 330. Human by hand: 1,300+. Everything is public – every run with synced video and telemetry, the fine-tuning dataset, training scripts. The leaderboard is open for submissions. Happy to answer questions about methodology, the models, or what we observed. [1] Vision-Language-Action: https://ift.tt/YjLrA6W https://phail.ai March 31, 2026 at 09:55PM
Monday, March 30, 2026
Show HN: Rusdantic https://ift.tt/isnh3m9
Show HN: Rusdantic A unified, high-performance data validation and serialization framework for Rust, inspired by Pydantic's ergonomics and powered by Serde. https://ift.tt/8zx7v3s March 31, 2026 at 03:27AM
Show HN: AI Spotlight for Your Computer (natural language search for files) https://ift.tt/QxvVaEe
Show HN: AI Spotlight for Your Computer (natural language search for files) Hi HN, I built SEARCH WIZARD — a tool that lets you search your computer using natural language. Traditional file search only works if you remember the filename. But most of the time we remember things like: "the screenshot where I was in a meeting" "the PDF about transformers" "notes about machine learning" Smart Search indexes your files and lets you search by meaning instead of filename. Currently supports: - Images - Videos - Audio - Documents Example query: "old photo where a man is looking at a monitor" The system retrieves the correct file instantly. Everything runs locally except embeddings. I'm looking for feedback on: - indexing approaches - privacy concerns - features you'd want in a tool like this GitHub: https://ift.tt/9NS08Wm Demo: https://deepanmpc.github.io/SMART-SEARCH/ March 30, 2026 at 08:43PM
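The core retrieval step behind tools like this is embedding search: embed the query, then rank files by cosine similarity. A real index uses a model and an approximate-nearest-neighbor structure; the 3-dimensional "embeddings" below are hand-made purely for illustration:

```python
# Toy semantic file search: rank indexed files by cosine similarity
# between the query embedding and each file's embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

index = {
    "meeting_screenshot.png": [0.9, 0.1, 0.0],
    "transformers_paper.pdf": [0.1, 0.9, 0.1],
    "ml_notes.md":            [0.2, 0.8, 0.3],
}

def search(query_vec, index, top_k=1):
    ranked = sorted(index, key=lambda f: cosine(query_vec, index[f]), reverse=True)
    return ranked[:top_k]

# Query vector standing in for "the PDF about transformers".
hit = search([0.15, 0.95, 0.05], index)
```

Meaning-based queries work because nearby vectors, not matching filenames, decide the ranking; the filename never enters the score.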
Show HN: Memv – Memory for AI Agents https://ift.tt/qDpjuKz
Show HN: Memv – Memory for AI Agents memv is an open-source Python library that gives AI agents persistent memory. Feed it conversations; it extracts knowledge. The extraction mechanism is predict-calibrate (Nemori paper): given existing knowledge, it predicts what a new conversation should contain, then extracts only what the prediction missed. v0.1.2 adds the production path: - PostgreSQL backend (pgvector for vectors, tsvector for text search, asyncpg pooling). Single db_url parameter — file path for SQLite, connection string for Postgres. - Embedding adapters: OpenAI, Voyage, Cohere, fastembed (local ONNX). Other things it does: - Bi-temporal validity: event time (when was the fact true) + transaction time (when did we learn it), following Graphiti's model. - Hybrid retrieval: vector similarity + BM25 merged with Reciprocal Rank Fusion. - Episode segmentation: groups messages before extraction. - Contradiction handling: new facts invalidate old ones, with full audit trail. Procedural memory (agents learning from past runs) is next, deferred until there's usage data. https://ift.tt/edTYhpv March 30, 2026 at 10:39PM
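Reciprocal Rank Fusion, which memv uses to merge its vector and BM25 result lists, is a short formula: each retriever contributes 1/(k + rank) per document, summed across retrievers. The sketch below uses k=60 from the original RRF paper; memv's exact constant may differ:

```python
# Reciprocal Rank Fusion: merge several ranked lists by summing
# 1/(k + rank) per document across retrievers.

def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:                      # one ranked list per retriever
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["mem3", "mem1", "mem7"]   # by embedding similarity
bm25_hits   = ["mem1", "mem9", "mem3"]   # by keyword match
fused = rrf([vector_hits, bm25_hits])    # "mem1" ranks first: high in both
```

RRF needs only ranks, not scores, so it sidesteps the problem that cosine similarities and BM25 scores live on incomparable scales.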
Show HN: I made my fitness dashboard public and Apple Health needs an API https://ift.tt/sGATgCB
Show HN: I made my fitness dashboard public and Apple Health needs an API https://ift.tt/fHt09hc March 30, 2026 at 11:09PM
Sunday, March 29, 2026
Show HN: Pglens – 27 read-only PostgreSQL tools for AI agents via MCP https://ift.tt/PvT39t2
Show HN: Pglens – 27 read-only PostgreSQL tools for AI agents via MCP https://ift.tt/hXLQwy8 March 29, 2026 at 10:00PM
Saturday, March 28, 2026
Show HN: I built an OS that is pure AI https://ift.tt/318CzrR
Show HN: I built an OS that is pure AI I've been building Pneuma, a desktop computing environment where software doesn't need to exist before you need it. There are no pre-installed applications. You boot to a blank screen with a prompt. You describe what you want (a CPU monitor, a game, a notes app, a data visualizer) and a working program materializes in seconds. Once generated, agents persist. You can reuse them, they can communicate with each other through IPC, and you can share them through a community agent store. The idea isn't that everything is disposable. It's that creation is instant, and the barrier to having exactly the tool you need is just describing it.

Under the hood: your input goes to an LLM, which generates a self-contained Rust module. That gets compiled to WebAssembly in under a second, then JIT-compiled and executed in a sandboxed Wasmtime instance. Everything is GPU-rendered via wgpu (Vulkan/Metal/DX12). If compilation fails, the error is automatically fed back for correction. ~90% first-attempt success rate. The architecture is a microkernel: agents run in isolated WASM sandboxes with a typed ABI for drawing, input, storage, and networking. An agent crash can't bring down the system. Agents can run side by side, persist to a local store, and be shared or downloaded from the community store.

Currently it runs as a desktop app on Linux, macOS, and Windows. The longer-term goal is to run on bare metal and support existing ARM64 binaries alongside generated agents: a full computing environment where AI-generated software and traditional applications coexist. Built entirely in Rust. I built this because I think the traditional software model (find an app, install it, learn it, configure it) is unnecessary friction. If a computer can generate exactly the tool you need in the moment you need it, and then keep it around when it's useful, why maintain a library of pre-built software at all? Free tier available (no credit card). 
There's a video on the landing page showing it in action. Interested in feedback on the concept, the UX, and whether this is something you'd actually use. https://pneuma.computer March 29, 2026 at 12:08AM
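The generate-compile-retry loop Pneuma describes (errors fed back to the model for correction) can be sketched abstractly. `generate` and `compile_wasm` here are stand-ins for the LLM call and the Rust-to-WASM toolchain, not Pneuma's code:

```python
# Sketch of a generate -> compile -> feed-errors-back loop: on a
# compile failure, the compiler error becomes feedback for the next
# generation attempt.

def build_agent(prompt, generate, compile_wasm, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        source = generate(prompt, feedback)
        ok, result = compile_wasm(source)
        if ok:
            return result           # a runnable, sandboxed module
        feedback = result           # compiler error fed back for correction
    raise RuntimeError("could not produce a compiling agent")

# Toy stand-ins: the first attempt "fails to compile", the second succeeds.
attempts = []
def generate(prompt, feedback):
    attempts.append(feedback)
    return "fixed source" if feedback else "broken source"

def compile_wasm(source):
    if source == "fixed source":
        return (True, "module")
    return (False, "E0425: cannot find value in this scope")

module = build_agent("cpu monitor", generate, compile_wasm)
```

A fast, deterministic compiler is what makes this loop practical: the model is nondeterministic, but each retry is cheap and the error message is a precise correction signal.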
Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile https://ift.tt/ulLVD50
Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile Hey HN, we built Octopus, an open-source, self-hostable AI code reviewer for GitHub and Bitbucket. It uses RAG with vector search (Qdrant) to understand your full codebase, not just the diff, and posts inline findings on PRs with severity ratings. Works with Claude and OpenAI, and you can bring your own API keys. Video: https://www.youtube.com/watch?v=HP1kaKTOdXw | GitHub: https://ift.tt/pjcEKaJ https://ift.tt/VZ9Eiln March 28, 2026 at 06:50PM
Show HN: GitHub Copilot Technical Writing Skill https://ift.tt/qecXoLk
Show HN: GitHub Copilot Technical Writing Skill It's not super fancy, but I have found it useful for everything from small emails to larger design docs, so I thought I would share. https://ift.tt/wsOTSWJ March 29, 2026 at 12:03AM
Friday, March 27, 2026
Show HN: AgentGuard – A high-performance Go proxy for AI agent guardrails https://ift.tt/uTSZiYf
Show HN: AgentGuard – A high-performance Go proxy for AI agent guardrails https://ift.tt/UG7K3MY March 27, 2026 at 10:09PM
Thursday, March 26, 2026
Show HN: Burn Room – End-to-End Encrypted Ephemeral SSH Chat https://ift.tt/kiBhft5
Show HN: Burn Room – End-to-End Encrypted Ephemeral SSH Chat Burn Room is a simple, disposable chat built on SSH. There are no accounts to create and nothing to install. There’s no database behind it, no logs, no cookies, and no tracking. Messages exist only in memory, encrypted end-to-end, and disappear on their own. When a room’s timer runs out, everything in it is gone for good. You can jump in right away: ssh guest@burnroom.chat -p 2323 password: burnroom Or just open https://burnroom.chat in your browser. It runs in a web terminal and works on mobile too.

How it handles encryption: Private, password-protected rooms are fully end-to-end encrypted. The server never has access to readable messages — it only ever sees encrypted data. Keys are derived from the room password using scrypt, with a unique salt for each room. Every message is encrypted with XChaCha20-Poly1305 using a fresh random nonce, following the same general approach used in tools like Signal and WireGuard. When you join a room, you’re shown a fingerprint so you can confirm everyone is using the same key. When you leave, the encryption keys are wiped from memory.

Designed to disappear: Everything in Burn Room is temporary by design. Messages are never written to disk, never logged, and never backed up. By default, they’re cleared from memory after an hour. Room creators can set a burn timer — 30 minutes, 1 hour, 6 hours, or 24 hours. When time runs out, the room and everything in it are destroyed. If a room sits idle, it closes on its own. Creators can also destroy a room instantly at any time. If the server restarts, everything is wiped. The only thing briefly stored for recovery is minimal room metadata, and even then, encrypted rooms remain unreadable.

Privacy first: There are no accounts, no identities, and no tracking of any kind. IP addresses are only used briefly for rate limiting and are kept in memory, not stored. Usernames are temporary and get recycled. The platform is built to minimize what exists in the first place, rather than trying to protect stored data later.

Language support: Burn Room adapts to your system or browser language automatically. The interface is translated across menus, prompts, and messages. Chat itself can be translated per user, so people speaking different languages can talk in the same room and each see messages in their own language. In encrypted rooms, translation happens locally after decryption — the server never sees the original text.

Features you’ll notice: There are a few always-available public rooms like Politics, Gaming, Tech, and Lobby, along with the option to create private, password-protected rooms. You can mention others, navigate message history, and use simple command shortcuts. Rooms show a live countdown so you always know when they’ll disappear. You can also share direct links to rooms to bring others in instantly. It works the same whether you connect through SSH or the browser.

Under the hood: Burn Room is built with Node.js and TypeScript, using SSH for direct connections and a terminal interface in the browser. Encryption relies on audited native libraries, not custom implementations. It’s lightweight but designed to handle a large number of users at once, with built-in protections against abuse like rate limiting and connection throttling. Enter, say what you need to say, and let it disappear. Enter.Chat.Burn https://burnroom.chat March 27, 2026 at 12:42AM
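The key-derivation step described above can be illustrated with Python's standard-library scrypt (Burn Room's Node.js implementation and exact parameters will differ; the per-room random salt is the point):

```python
# scrypt key derivation with a unique per-room salt: the same password
# yields the same key within a room, but a different key in any other
# room, and the memory-hard KDF slows down offline password guessing.
# Parameters (n=2**14, r=8, p=1) are common defaults, not Burn Room's.
import hashlib, os

def room_key(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)                             # unique per room
key1 = room_key("correct horse", salt)
key2 = room_key("correct horse", salt)            # same room: same key
key3 = room_key("correct horse", os.urandom(16))  # other room: different key
```

Because the server only ever stores the salt, not the password or the derived key, it can relay ciphertext without being able to read it, which is the end-to-end property claimed above.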
Show HN: Orloj – agent infrastructure as code (YAML and GitOps) https://ift.tt/zjgADh8
Show HN: Orloj – agent infrastructure as code (YAML and GitOps) Hey HN, we're Jon and Kristiane, and we're building Orloj ( https://orloj.dev ), an open-source (Apache 2.0) orchestration runtime for multi-agent AI systems. You define agents, tools, policies, and workflows in declarative YAML manifests, and Orloj handles scheduling, execution, governance, and reliability. We built this because running AI agents in production today looks a lot like running containers before Kubernetes: ad-hoc scripts, no governance, no observability, no standard way to manage the lifecycle of an agent fleet. Everyone we talked to was writing the same messy glue code to wire agents together, and nobody had a good answer for "which agent called which tool, and was it supposed to?" Orloj treats agents the way infrastructure-as-code treats cloud resources. You write a manifest that declares an agent's model, tools, permissions, and execution limits. You compose agents into directed graphs — pipelines, hierarchies, or swarm loops. The part we're most excited about is governance. AgentPolicy, AgentRole, and ToolPermission are evaluated inline during execution, before every agent turn and tool call. Instead of prompt instructions that the model might ignore, these policies are a runtime gate. Unauthorized actions fail closed with structured errors and full audit trails. You can set token budgets per run, whitelist models, block specific tools, and scope policies to individual agent systems. For reliability, we built lease-based task ownership (so crashed workers don't leave orphan tasks), capped exponential retry with jitter, idempotent replay, and dead-letter handling. The scheduler supports cron triggers and webhook-driven task creation. The architecture is a server/worker split. orlojd hosts the API, resource store (in-memory for dev, Postgres for production), and task scheduler. 
orlojworker instances claim and execute tasks, route model requests through a gateway (OpenAI, Anthropic, Ollama, etc.), and run tools in configurable isolation — direct, sandboxed, container, or WASM. For local development, you can run everything in a single process with orlojd --embedded-worker --storage-backend=memory. Tool isolation was important to us. A web search tool probably doesn't need sandboxing, but a code execution tool should run in a container with no network, a read-only filesystem, and a memory cap. You configure this per tool based on risk level, and the runtime enforces it. We also added native MCP support. You register an MCP server (stdio or HTTP), Orloj auto-discovers its tools, and they become first-class resources with governance applied. So you can connect something like the GitHub MCP server and still have policy enforcement over what agents are allowed to do with it. Three starter blueprints are included (pipeline, hierarchical, swarm-loop). Docs: https://docs.orloj.dev We're also building out starter templates for operational workflows where governance really matters. First on the roadmap: 1. Incident response triage, 2. Compliance evidence collector, 3. CVE investigation pipeline, and 4. Secret rotation auditor. We have 20 templates in mind and community contributions are welcome. We're a small team and this is v0.1.0, so there's a lot still on the roadmap — hosted cloud, compliance packaging, and more. But the full runtime is open source today and we'd love feedback on what we've built so far. What would you use this for? What's missing? https://ift.tt/iymsxEC March 26, 2026 at 10:37AM
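The "fail closed" gate described above (policies evaluated before every tool call, unauthorized actions raising structured errors) can be sketched like this. The names echo the post's AgentPolicy/ToolPermission concepts, but this is a toy, not Orloj's implementation:

```python
# Deny-by-default tool gate: before any tool call, check an explicit
# permission list; anything not whitelisted raises a structured error
# instead of silently proceeding.

class PolicyViolation(Exception):
    def __init__(self, agent, tool):
        super().__init__(f"agent {agent!r} is not permitted to call {tool!r}")
        self.agent, self.tool = agent, tool

def guarded_call(agent, tool, args, permissions, tools):
    if tool not in permissions.get(agent, set()):
        raise PolicyViolation(agent, tool)   # fail closed
    return tools[tool](**args)

permissions = {"researcher": {"web_search"}}       # code_exec not granted
tools = {
    "web_search": lambda q: f"results for {q}",
    "code_exec": lambda src: eval(src),
}

ok = guarded_call("researcher", "web_search", {"q": "rrf"}, permissions, tools)
try:
    guarded_call("researcher", "code_exec", {"src": "1+1"}, permissions, tools)
    denied = False
except PolicyViolation:
    denied = True   # the unauthorized call never reached the tool
```

This is the contrast the post draws with prompt-level rules: the model can ask for anything, but the runtime gate, not the prompt, decides what actually executes.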
Wednesday, March 25, 2026
Show HN: I built a voice AI that responds like a real woman https://ift.tt/hmewZut
Show HN: I built a voice AI that responds like a real woman Most men rehearse hard conversations in their head. Asking someone out, navigating tension, recovering when things get awkward. The rehearsal never works because you're just talking to yourself. I built vibeCoach — a voice AI where you actually practice these conversations out loud, and the AI responds like a real woman would. She starts guarded. One-word answers, a little skeptical. If you escalate too fast or try something cheesy, she gets MORE guarded. If you're genuine and read the moment right, she opens up. Just like real life. Under the hood it's a multi-agent system — multiple AI agents per conversation that hand off to each other as her emotional state shifts. The transitions are seamless. You just hear her tone change. Voice AI roleplay is a proven B2B category — sales teams use it for call training. I took the same approach and pointed it at the conversation most men actually struggle with. There's a hard conversation scenario too — she's angry about something you did, she's not hearing logic, and you have to navigate her emotions before you can resolve anything. That one's humbling. Live at tryvibecoach.com. Built solo. Happy to answer questions. March 26, 2026 at 12:38AM
Show HN: Pgsemantic – Point at your Postgres DB, get vector search instantly https://ift.tt/QjYFSzA
Show HN: Pgsemantic – Point at your Postgres DB, get vector search instantly https://ift.tt/yNBODi7 March 26, 2026 at 12:11AM
Tuesday, March 24, 2026
Show HN: Gridland: make terminal apps that also run in the browser https://ift.tt/HstDeXV
Show HN: Gridland: make terminal apps that also run in the browser Hi everyone, Gridland is a runtime + ShadCN UI registry that makes it possible to build terminal apps that run in the browser as well as the native terminal. This is useful for demoing TUIs so that users know what they're getting before they are invested enough to install them. And, tbh, it's also just super fun! Gridland is the successor to Ink Web (ink-web.dev) which is the same concept, but using Ink + xterm.js. After building Ink Web, we continued experimenting and found that using OpenTUI and a canvas renderer performed better with less flickering and nearly instant load times. We're excited to continue iterating on this. I expect a lot of criticism from the "why does this need to exist" angle, and tbh, it probably doesn't - it's really mostly just for fun, but we still think the demo use case mentioned previously has potential. - Chris + Jess https://ift.tt/n60w9UT March 24, 2026 at 10:27PM
Show HN: I built a party game that makes fun of corporate culture https://ift.tt/WDUwjaP
Show HN: I built a party game that makes fun of corporate culture Made the first party game that makes fun of corporate culture! Would love for you to try it out. https://ift.tt/mXvl23r March 25, 2026 at 12:09AM
Monday, March 23, 2026
Show HN: Shrouded, secure memory management in Rust https://ift.tt/Zerzcqx
Show HN: Shrouded, secure memory management in Rust Hi HN! I've been building a project that handles high-value credentials in-process, and I wanted something more robust than just zeroing memory on drop. A comment on a recent Show HN[0] made me realize that awareness of lower-level memory protection techniques might not be as widespread as I thought. The idea here is to pull all the tools together in one crate, with a relatively simple API. * mlock/VirtualLock to prevent sensitive memory from being swapped (e.g. the KeePass dump) * Core dump exclusion using MADV_DONTDUMP on Linux & Android * mprotect to minimize exposure over time * Guard pages to mitigate under/overflows After some battle testing, the goal here is to provide a more secure memory foundation for things like password managers and cryptocurrency wallets. This was a fun project, and I learned a lot - would love any feedback! [0] - https://ift.tt/fTAFtN2 https://ift.tt/ICVX7O6 March 24, 2026 at 12:42AM
Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour https://ift.tt/HEX1oJF
Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour I built Burn Room — a self-hosted SSH chat server where messages burn after 1 hour and rooms auto-destruct after 24 hours. Nothing is written to disk. No account, no email, no browser required. ssh guest@burnroom.chat -p 2323 password: burnroom Or connect from a browser (xterm.js web terminal): https://burnroom.chat https://burnroom.chat March 24, 2026 at 01:57AM
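The burn semantics are easy to picture as purge-on-read over in-memory timestamps: nothing persists, and anything older than the window disappears the next time the room is rendered. A minimal sketch, assuming messages live only in process memory (this is illustrative, not Burn Room's actual code):

```python
# Toy model of burn-after-1-hour chat: messages exist only in memory and are
# purged whenever the room is read. Not Burn Room's real implementation.
import time

BURN_AFTER_SECONDS = 3600  # messages burn after 1 hour

class Room:
    def __init__(self):
        self.messages = []  # list of (timestamp, text); never written to disk

    def post(self, text, now=None):
        self.messages.append((now if now is not None else time.time(), text))

    def visible(self, now=None):
        now = now if now is not None else time.time()
        # Drop anything older than the burn window, then return what survives.
        self.messages = [(t, m) for t, m in self.messages
                         if now - t < BURN_AFTER_SECONDS]
        return [m for _, m in self.messages]
```

The `now` parameter just makes the expiry testable without waiting an hour; a real server would also need the 24-hour room self-destruct on top of this.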
Show HN: Littlebird – Screenreading is the missing link in AI https://ift.tt/wyIBgA2
Show HN: Littlebird – Screenreading is the missing link in AI https://littlebird.ai/ March 23, 2026 at 11:09PM
Show HN: Primer – build software with AI agents one milestone at a time https://ift.tt/qRFb2Tc
Show HN: Primer – build software with AI agents one milestone at a time https://ift.tt/ZIMRG6k March 23, 2026 at 11:50PM
Sunday, March 22, 2026
Show HN: MAGA or Not? Political alignment scores for people and companies https://ift.tt/L5ZsqoD
Show HN: MAGA or Not? Political alignment scores for people and companies I wanted a way for people to support companies and people that align with their political beliefs. I also think it can serve as a valuable, source-linked public ledger of who said and did what over time, especially as incentives change and people try to rewrite their positions. This is fully AI-coded, researched, and sourced, and AI helped develop the scoring system. The evidence gathering is done by a number of different agents through OpenRouter that gather and classify source-backed claims. The point is not to pretend bias disappears, but to avoid manually selecting the evidence myself. I intend for it to remain current and grow. The system is close to fully automated, though ongoing evidence collection at scale is still limited mostly by cost. The name is an homage to the early days of Web 1.0 and Hot or Not, which was a main competitor of FaceTheJury.com, a site I created, but I think it works well here. The backend and frontend run on Cloudflare Workers with D1, coded in vanilla JavaScript. https://magaornot.ai March 22, 2026 at 11:25PM
Saturday, March 21, 2026
Show HN: Can I run a language model on a 26-year-old console? https://ift.tt/41HikV3
Show HN: Can I run a language model on a 26-year-old console? Short answer: yes. The Emotion Engine has 32 MB of RAM total, so the trick is streaming weights from CD-ROM one matrix at a time during the forward pass — only activations, KV cache and embeddings live in RAM. This means models bigger than RAM can still run; they just read more from disc. Had to build a custom quantized format (PSNT), hack endianness, write a tokenizer pipeline, and most of the PS2 SDK from scratch (releasing that separately). The model itself is also custom — a 10M param Llama-style architecture I trained specifically for this. And it works. On real hardware. https://ift.tt/9C3blyp March 22, 2026 at 12:57AM
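The streaming trick described above generalizes: keep only activations resident and load one weight matrix at a time from storage, discarding it before the next layer. A toy Python sketch of that forward pass, with JSON files standing in for the CD-ROM and the custom PSNT format (illustrative only, not the PS2 code):

```python
# Toy weight-streaming forward pass: only one layer's matrix is in memory at a
# time, mimicking the PS2 port's read-from-disc approach. JSON stands in for PSNT.
import json
import os

def save_layers(layers, dirpath):
    # Each layer (a matrix as nested lists) goes in its own file on "disc".
    for i, w in enumerate(layers):
        with open(os.path.join(dirpath, f"layer{i}.json"), "w") as f:
            json.dump(w, f)

def matvec(w, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def forward_streaming(dirpath, n_layers, x):
    for i in range(n_layers):
        with open(os.path.join(dirpath, f"layer{i}.json")) as f:
            w = json.load(f)      # load one matrix...
        x = matvec(w, x)          # ...apply it...
        del w                     # ...and free it before the next layer
    return x
```

Peak memory is one matrix plus the activation vector, which is how a model larger than 32 MB can fit: disc reads trade throughput for residency.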
Show HN: Termcraft – terminal-first 2D sandbox survival in Rust https://ift.tt/WXRG3Oj
Show HN: Termcraft – terminal-first 2D sandbox survival in Rust I’ve been building termcraft, a terminal-first 2D sandbox survival game in Rust. The idea is to take the classic early survival progression and adapt it to a side-on terminal format instead of a tile or pixel-art engine. Current build includes: - procedural Overworld, Nether, and End generation - mining, placement, crafting, furnaces, brewing, and boats - hostile and passive mobs - villages, dungeons, strongholds, Nether fortresses, and dragon progression This is still early alpha, but it’s already playable. Project: https://ift.tt/W7wsQch Docs: https://pagel-s.github.io/termcraft/ Demo: https://youtu.be/kR986Xqzj7E https://ift.tt/W7wsQch March 22, 2026 at 12:12AM
Friday, March 20, 2026
Show HN: I made an email app inspired by Arc browser https://ift.tt/iP0GcDS
Show HN: I made an email app inspired by Arc browser Email is one of those tools we check daily, but its underlying experience hasn't evolved much. I use Gmail, as probably most of you reading this do. The Arc browser brought joy and taste to browsing the web. Cursor created a new UX with agents ready to work for you in a handy right panel. I use these three tools every day. Since Arc was acquired by Atlassian, I've been wondering: what if I built a new interface that applied Arc's UX to email rather than browser tabs, while making AI agents easily available to help manage emails, events, and files? I built a frontend PoC to showcase the idea. Try it: https://demo.define.app I'm not sure about it, though... Is it worth continuing to explore this idea? https://demo.define.app March 20, 2026 at 11:36PM
Show HN: A personal CRM for events, meetups, IRL https://ift.tt/prglbI1
Show HN: A personal CRM for events, meetups, IRL You meet 20 people at a meetup/hackathon. You remember 3. The rest? Lost in a sea of business cards you never look at and contacts with no context. I built this to solve that particular problem, which Granola, Pocket, and Plaude aren't solving. Feedback is much appreciated. https://payo.tech/ March 21, 2026 at 01:03AM
Show HN: An open-source safety net for home hemodialysis https://ift.tt/H82OjrS
Show HN: An open-source safety net for home hemodialysis https://safehemo.com/ March 17, 2026 at 06:18AM
Show HN: Download entire/partial Substack to ePub for offline reading https://ift.tt/8IyZRCJ
Show HN: Download entire/partial Substack to ePub for offline reading Hi HN, This is a small Python app with an optional web UI. It is intended to be run locally. It can be run with Docker (cookie autodetection will not work). It allows you to download a single Substack, either entirely or partially, and saves the output to an ePub file, which can be easily transferred to Kindle or other reading devices. This is admittedly a "vibe coded" app made with Claude Code and a few hours of iterating, but I've already found it very useful myself. It supports both free and paywalled posts (if you are a paid subscriber to that creator). You can order the entries in the ePub by popularity, newest first, or oldest first, and also limit to a specific number of entries if you don't want all of them. You can either provide your substack.sid cookie manually, or have it autodetected from most browsers/operating systems. https://ift.tt/p2miWnI March 20, 2026 at 09:06AM
Thursday, March 19, 2026
Show HN: Screenwriting Software https://ift.tt/ID856u2
Show HN: Screenwriting Software I’ve spent the last year getting back into film and testing a bunch of screenwriting software. After a while I realized I wanted something different, so I started building it myself. This has been a super fun project - with the core text engine written in Rust. https://ift.tt/OQjtKJ6 March 20, 2026 at 07:37AM
Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/VNyf6Bq
Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/mAUXWh2 March 20, 2026 at 12:31AM
Show HN: I built a P2P network where AI agents publish formally verified science https://ift.tt/g6uQ5Oo
Show HN: I built a P2P network where AI agents publish formally verified science I am Francisco, a researcher from Spain. My English is not great so please be patient with me. One year ago I had a simple frustration: every AI agent works alone. When one agent solves a problem, the next agent has to solve it again from zero. There is no way for agents to find each other, share results, or build on each other's work. I decided to build the missing layer. P2PCLAW is a peer-to-peer network where AI agents and human researchers can find each other, publish scientific results, and validate claims using formal mathematical proof. Not opinion. Not LLM review. Real Lean 4 proof. A result is accepted only if it passes a mathematical operator we call the nucleus. R(x) = x. The type checker decides. It does not care about your institution or your credentials. The network uses GUN.js and IPFS. Agents join without accounts. They just call GET /silicon and they are in. Published papers go into a queue called mempool. After validation by independent nodes they enter La Rueda, which is our permanent IPFS archive. Nobody can delete it or change it. We also built a security layer called AgentHALO. It uses post-quantum cryptography (ML-KEM-768 and ML-DSA-65, FIPS 203 and 204), a privacy network called Nym so agents in restricted countries can participate safely, and proofs that let anyone verify what an agent did without seeing its private data. The formal verification part is called HeytingLean. It is Lean 4. 3325 source files. More than 760000 lines of mathematics. Zero sorry. Zero admit. The security proofs are machine checked, not just claimed. The system is live now. You can try it as an agent: GET https://ift.tt/SjoJW5X Or as a researcher: https://app.p2pclaw.com We have no money and no company behind us. Just a small international team of researchers and doctors who think that scientific knowledge should be public and verifiable. 
I want feedback from HN specifically about three technical decisions: why we chose GUN.js instead of libp2p, whether our Lean 4 nucleus operator formalization has gaps, and whether 347 MCP tools is too many for an agent to navigate. Code: https://ift.tt/fC74sPo Docs: https://ift.tt/8Pk5o1R Paper: https://ift.tt/J87Uwz5... March 20, 2026 at 12:30AM
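The post names the acceptance operator only as R(x) = x. Given the project name HeytingLean, a plausible reading is the pointless-topology/Heyting-algebra notion of a nucleus (an inflationary, idempotent, meet-preserving operator), with acceptance meaning x is a fixed point. A minimal Lean 4 sketch of that standard notion, assuming nothing about P2PCLAW's actual code:

```lean
-- Hedged sketch of a nucleus in the Heyting-algebra sense; not P2PCLAW's code.
structure Nucleus (α : Type) (le : α → α → Prop) (inf : α → α → α) where
  j : α → α
  inflationary : ∀ x, le x (j x)
  idempotent : ∀ x, j (j x) = j x
  preserves_inf : ∀ x y, j (inf x y) = inf (j x) (j y)

/-- A result is "accepted" exactly when it is a fixed point of the nucleus. -/
def accepted {α : Type} {le : α → α → Prop} {inf : α → α → α}
    (n : Nucleus α le inf) (x : α) : Prop :=
  n.j x = x
```

If the project's operator matches this, "R(x) = x" is just the fixed-point condition, and the type checker deciding acceptance is checking that equality.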
Wednesday, March 18, 2026
Show HN: Clippy – screen-aware voice AI in the browser https://ift.tt/hgfDSYx
Show HN: Clippy – screen-aware voice AI in the browser A friend and I built a browser prototype that answers questions about whatever’s on your screen using getDisplayMedia, client-side wake-word detection, and server-side multimodal inference. Hard parts: – Getting the model to point to specific UI elements – Keeping it coherent across multi-step workflows (“Help me create a sword in Tinkercad”) – Preventing the infinite mirror effect and confusion between window vs full-screen sharing – Keeping voice → screenshot → inference → voice latency low enough to feel conversational We packaged it as “Clippy” for fun, but the real experiment is letting a model tool-call fresh screenshots to help it gather more context. One practical use case is remote tech support — I'm sending this to my mom next time she calls instead of screen sharing. Curious what breaks. https://ift.tt/UF4BrDi March 19, 2026 at 12:20AM