Friday, October 31, 2025

Show HN: 24-hour Halloween radio station hosted by Dr. Eleven https://ift.tt/QRHYpEj

Show HN: 24-hour Halloween radio station hosted by Dr. Eleven I built a 24h Halloween radio stream hosted by Dr. Eleven, using ElevenLabs and HLS for delivery. Excited for you to hear it + open to any feedback. https://ift.tt/3WQFxsj November 1, 2025 at 04:10AM

Show HN: First5Minutes, Your first 5 minutes decide your day https://ift.tt/63emKdH

Show HN: First5Minutes, Your first 5 minutes decide your day
Hi everyone, I have been experimenting with a simple idea: what if the first five minutes of your day decided the rest? I built First5Minutes, a small web app that helps you start strong. You choose one meaningful mission for the day and complete it with quick photo, video, or text proof. I created it to fix my own habit of overplanning and not starting. The focus is on doing one real thing each day, not maintaining long to-do lists.
Key features:
• One mission per day for focus
• Quick proof capture with photo, video, or text
• Optional partner verification for accountability
• Streaks based on proof, not checkmarks
Try it here → https://ift.tt/6WDEtSx No install or sign-up wall. Just start a mission.
I would love your feedback on:
• Is this level of simplicity helpful or limiting?
• What part made (or failed to make) you feel you actually did something?
• Any friction in completing your first mission?
Built with Next.js, Supabase, and Clerk. Thanks for checking it out. I appreciate your time and thoughts. https://ift.tt/iXRNZg1 November 1, 2025 at 01:42AM

Show HN: A chess middlegame trainer so I can stop blundering https://ift.tt/8aWDtdj

Show HN: A chess middlegame trainer so I can stop blundering https://dontblunder.com November 1, 2025 at 01:00AM

Thursday, October 30, 2025

Show HN: Ellipticc Drive – open-source cloud drive with E2E and PQ encryption https://ift.tt/madTX1b

Show HN: Ellipticc Drive – open-source cloud drive with E2E and PQ encryption
Hey HN, I’m Ilias, 19, from Paris. I built Ellipticc Drive, an open-source cloud drive with true end-to-end encryption and post-quantum security, designed to be Dropbox-like in UX but with zero access to your data, even by the host.
What’s unique:
- Free 10GB for every user, forever
- Open-source frontend (audit or self-host if you want)
Tech stack:
- Frontend: Next.js
- Crypto: WebCrypto (hashing) + Noble (core primitives)
- Encryption: XChaCha20-Poly1305 (file chunks)
- Key wrapping: Kyber (ML-KEM768)
- Signing: Ed25519 + Dilithium2 (ML-DSA65)
- Key derivation: Argon2id → Master Key → encrypts all keypairs & CEKs
Try it live: https://ellipticc.com Frontend source: https://ift.tt/9pKdm8Q
Would love feedback from devs and security folks — particularly on encryption flow, architecture, or UX. I’ll be around to answer every technical question in the comments! https://ellipticc.com October 31, 2025 at 01:00AM
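The key hierarchy described (password-derived master key that wraps per-file content keys) can be sketched in Python. This uses standard-library stand-ins only: scrypt in place of Argon2id and an HMAC-derived XOR keystream in place of the real ML-KEM768 + XChaCha20-Poly1305 wrap, so it shows the shape of the hierarchy, not a secure construction.

```python
import hashlib, hmac, os

def derive_master_key(password: bytes, salt: bytes) -> bytes:
    # Stand-in KDF: the real system uses Argon2id; scrypt is the
    # closest memory-hard KDF in the Python standard library.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

def wrap_key(master_key: bytes, cek: bytes) -> bytes:
    # Stand-in key wrap: XOR with an HMAC-derived keystream. The real
    # system wraps content keys with Kyber plus an AEAD; this only
    # illustrates that the master key never touches file data directly.
    stream = hmac.new(master_key, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(cek, stream))

salt = os.urandom(16)
mk = derive_master_key(b"correct horse battery staple", salt)
cek = os.urandom(32)          # per-file content-encryption key
wrapped = wrap_key(mk, cek)
assert wrap_key(mk, wrapped) == cek  # XOR wrap is its own inverse
```

The point of the two-level design is that changing the password only requires re-wrapping the small content keys, not re-encrypting every file.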

Show HN: ekoAcademic – Convert ArXiv papers to interactive podcasts https://ift.tt/V8Uu2Fo

Show HN: ekoAcademic – Convert ArXiv papers to interactive podcasts https://ift.tt/wOejzvn October 31, 2025 at 02:33AM

Show HN: Meals You Love – AI-powered meal planning and grocery shopping https://ift.tt/QOAx1ou

Show HN: Meals You Love – AI-powered meal planning and grocery shopping Meals You Love is a meal planning app that creates weekly meal plans tailored to your tastes and dietary preferences. It integrates with Kroger and Instacart's APIs so you can add your meal plan groceries directly to your cart. You can also import your own recipes to include alongside AI suggestions. I originally built this to help my wife with meal planning and grocery shopping. We were always struggling to decide what to make and inevitably forgot ingredients. Most meal planners felt too rigid or generic, and few handled the grocery side well (or at all). We've also used meal kits like Home Chef in the past but they end up being quite expensive and produce a comical amount of packaging waste, plus you still wind up needing to purchase groceries anyway. In all honesty, I also wanted an excuse to try building something "real" using AI and to see if it could be used in an actually useful manner. Would love feedback from anyone interested in food, meal planning, or product design! Tech stack: - Cloud Run - Firestore - Vertex AI / Gemini https://ift.tt/w2Erji8 October 27, 2025 at 11:27PM

Show HN: I made CSV files double-click to open in Google Sheets instead of Excel https://ift.tt/CTRv5ku

Show HN: I made CSV files double-click to open in Google Sheets instead of Excel I built my first macOS app to automatically open CSV and XLS files in Google Sheets. I work as a marketing/revops person and often have to combine data from different platforms for reporting purposes. Google made the import flow super broken, with too many clicks in between. So I built a simple solution that saves me some time. Sharing it here; you can test it out for free. No subscription bullshit, just a one-time payment for unlimited usage if you like it. Happy double clicking! https://csvtosheets.com October 30, 2025 at 11:25PM

Wednesday, October 29, 2025

Tuesday, October 28, 2025

Show HN: Rewriting Scratch 3.0 from scratch in Lua (browser-free native runtime) https://ift.tt/Vp2gSFA

Show HN: Rewriting Scratch 3.0 from scratch in Lua (browser-free native runtime)
Built a native Scratch 3.0 runtime in Lua that runs .sb3 projects without a browser. Why? Browser sandboxing prevents access to hardware features (haptics, sensors, fine-grained perf controls). A native runtime gives you direct hardware access and lets you deploy to consoles, handhelds, and embedded devices. It also means much smaller binaries (LÖVE is ~7MB vs 50-100MB for Electron).
How it works:
- Scratch blocks compile to IR, then optimize, then generate Lua
- LuaJIT executes the compiled code
- Coroutine-based threading for concurrent scripts
- Lazy loading + LRU cache for memory management
- SVG support via resvg FFI
~100% compatible with Scratch 3.0 blocks. Extensions that need JavaScript won't work (no Music, TTS, Video Sensing), but the core blocks are there. Built on the LÖVE framework, so it's cross-platform (desktop, mobile, gaming devices). Still rough around the edges (user input not implemented yet, cloud variables only work locally), but it runs real Scratch projects today. https://ift.tt/zpj0QZi October 28, 2025 at 11:08PM
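The blocks → IR → Lua pipeline described above can be sketched as a toy compiler. Everything here (the block tuples, the IR opcodes, the `sprite:move` call) is invented for illustration; the project's real IR and codegen will differ.

```python
# Toy blocks -> IR -> Lua pipeline, in the spirit described above.

def blocks_to_ir(blocks):
    # Flatten a nested block list into simple IR tuples.
    ir = []
    for op, *args in blocks:
        if op == "repeat":
            count, body = args
            ir.append(("LOOP_START", count))
            ir.extend(blocks_to_ir(body))   # recurse into the loop body
            ir.append(("LOOP_END",))
        elif op == "move":
            ir.append(("MOVE", args[0]))
    return ir

def ir_to_lua(ir):
    # Emit Lua source text from the flat IR.
    lines = []
    for instr in ir:
        if instr[0] == "LOOP_START":
            lines.append(f"for _ = 1, {instr[1]} do")
        elif instr[0] == "LOOP_END":
            lines.append("end")
        elif instr[0] == "MOVE":
            lines.append(f"  sprite:move({instr[1]})")
    return "\n".join(lines)

program = [("repeat", 10, [("move", 5)])]
print(ir_to_lua(blocks_to_ir(program)))
```

Separating the IR step is what makes the optimization pass possible: the optimizer rewrites IR tuples before any Lua text exists.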

Show HN: Pipelex – declarative language for repeatable AI workflows (MIT) https://ift.tt/nYv3XW5

Show HN: Pipelex – declarative language for repeatable AI workflows (MIT)
We’re Robin, Louis, and Thomas. Pipelex is a DSL and a Python runtime for repeatable AI workflows. Think Dockerfile/SQL for multi-step LLM pipelines: you declare steps and interfaces; any model/provider can fill them.
Why this instead of yet another workflow builder?
- Declarative, not glue code: you state what to do; the runtime figures out how.
- Agent-first: each step carries natural-language context (purpose, inputs/outputs with meaning) so LLMs can follow, audit, and optimize. Our MCP server enables agents to run pipelines but also to build new pipelines on demand.
- Open standard under MIT: language spec, runtime, API server, editor extensions, MCP server, n8n node.
- Composable: pipes can call other pipes, created by you or shared in the community.
Why a domain-specific language?
- We need context, meaning, and nuance preserved in a structured syntax that both humans and LLMs can understand
- We need determinism, control, and reproducibility that pure prompts can't deliver
- Bonus: editors, diffs, semantic coloring, easy sharing, search & replace, version control, linters…
How we got there: Initially, we just wanted to solve every use case with LLMs but kept rebuilding the same agentic patterns across different projects. So we challenged ourselves to keep the code generic and separate from use-case specifics, which meant modeling workflows from the relevant knowledge and know-how. Unlike existing code/no-code frameworks for AI workflows, our abstraction layer doesn't wrap APIs; it transcribes business logic into a structured, unambiguous script executable by software and AI. Hence the "declarative" aspect: the script says what should be done, not how to do it. It's like a Dockerfile or SQL for AI workflows.
Additionally, we wanted the language to be LLM-friendly. Classic programming languages hide logic and context in variable names, functions, and comments: all invisible to the interpreter. In Pipelex, these elements are explicitly stated in natural language, giving AI full visibility: it's all logic and context, with minimal syntax.
Then, we didn't want to write Pipelex scripts ourselves, so we dogfooded: we built a Pipelex workflow that writes Pipelex workflows. It's in the MCP and CLI: "pipelex build pipe '…'" runs a multi-step, structured generation flow that produces a validated workflow ready to execute with "pipelex run". Then you can iterate on it yourself or with any coding agent.
What’s included: Python library, FastAPI and Docker, MCP server, n8n node, VS Code extension.
What we’d like from you:
1. Build a workflow: did the language work for you or against you?
2. Agent/MCP workflows and n8n node usability.
3. Suggest new kinds of pipes and other AI models we could integrate.
4. Looking for OSS contributors to the core library, but also to share pipes with the community.
Known limitations:
- Connectors: Pipelex doesn’t integrate with “your apps”; we focus on the cognitive steps, and you can integrate through code/API or using MCP or n8n
- Visualization: we need to generate flow charts
- The pipe builder is still buggy
- Run it yourself: we don’t yet provide a hosted Pipelex API; it’s in the works
- Cost tracking: we only track LLM costs, not image generation or OCR costs yet
- Caching and reasoning options: not supported yet
Links:
- GitHub: https://ift.tt/NkvGegq
- Cookbook: https://ift.tt/m0JYX4S
- Starter: https://ift.tt/JFh2xq1
- VS Code extension: https://ift.tt/gtS74jY
- Docs: https://ift.tt/WoAdSVl
- Demo video (2 min): https://youtu.be/dBigQa8M8pQ
- Discord for support and sharing: https://ift.tt/o2WSZzN
Thanks for reading. If you try Pipelex, tell us exactly where it hurts; that’s the most valuable feedback we can get. https://ift.tt/NkvGegq October 28, 2025 at 09:49PM
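The declarative idea behind the post ("the script says what should be done, not how") can be illustrated with a minimal Python runner: the pipeline below is pure data, and a generic loop decides execution. The step names, the dict fields, and the stand-in `run_step` are all invented; this is not Pipelex syntax.

```python
# Minimal illustration of a declarative pipeline: the "script" is data
# saying WHAT to do; the runner decides HOW. Not Pipelex syntax.

pipeline = [
    {"step": "extract", "purpose": "pull the question out of the email",
     "inputs": ["email"], "output": "question"},
    {"step": "answer",  "purpose": "answer the question concisely",
     "inputs": ["question"], "output": "reply"},
]

def run_step(step, context):
    # Stand-in "model call": a real runtime would route each step to an
    # LLM, passing the purpose and inputs as natural-language context.
    joined = " ".join(context[name] for name in step["inputs"])
    return f"[{step['step']}:{joined}]"

def run_pipeline(pipeline, context):
    # Each step reads named inputs from the context and writes its output
    # back, so later steps can declare dependencies by name alone.
    for step in pipeline:
        context[step["output"]] = run_step(step, context)
    return context

result = run_pipeline(pipeline, {"email": "When is the demo?"})
print(result["reply"])
```

Because the steps are data, the same structure can be linted, diffed, visualized, or generated by another program, which is the point the post makes about DSLs.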

Monday, October 27, 2025

Show HN: Easily visualize torch, Jax, tf, NumPy, etc. tensors https://ift.tt/dCQ4O3B

Show HN: Easily visualize torch, Jax, tf, NumPy, etc. tensors hey hn, wrote a python library for myself to visualize tensors. Makes learning and debugging deep learning code so much easier. Works seamlessly with colab/jupyter notebooks, and other python contexts. It's built on top of the graphics backend chalk ( https://chalk-diagrams.github.io/ ). why? Understanding deep learning code is hard, especially when it's unfamiliar, because it's hard to imagine tensor manipulations, e.g. `F.conv2d(x.unsqueeze(1), w.transpose(-1, -2)).squeeze().view(B, L, -1)`, in your head. Printing shapes and tensor values only got me so far. tensordiagram lets me quickly diagram tensors. Other python libraries for creating tensor diagrams are either too physics- and math-focused, not notebook-friendly, limited to visualizing single tensors, and/or serve a wider purpose (so they have a steep learning curve). https://ift.tt/4IdkX59 October 28, 2025 at 03:00AM
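The pain the post describes (tracking how shapes flow through a chain of ops) can be shown with a toy shape tracer in plain Python. These op rules are simplified stand-ins written for this sketch; they are not the tensordiagram API.

```python
import math

# Toy "shape tracer": follow how a tensor's shape changes through a
# chain of ops without doing any real math. Simplified stand-ins only.

def unsqueeze(shape, dim):
    s = list(shape); s.insert(dim, 1); return tuple(s)

def transpose(shape, d1, d2):
    s = list(shape); s[d1], s[d2] = s[d2], s[d1]; return tuple(s)

def view(shape, *new):
    # Resolve a single -1 the way torch's view does.
    total = math.prod(shape)
    known = math.prod(d for d in new if d != -1)
    return tuple(total // known if d == -1 else d for d in new)

B, L, D = 2, 7, 16
x = (B, L, D)              # (2, 7, 16)
x = unsqueeze(x, 1)        # (2, 1, 7, 16)
x = transpose(x, -1, -2)   # (2, 1, 16, 7)
x = view(x, B, L, -1)      # (2, 7, 16)
print(x)
```

A diagramming tool goes one step further by drawing each intermediate shape instead of printing tuples, but the bookkeeping is the same.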

Show HN: Action Engine — An API/Agent Buildkit Putting Flexibility First https://ift.tt/R84lmQ3

Show HN: Action Engine — An API/Agent Buildkit Putting Flexibility First https://ift.tt/84pGkWA October 28, 2025 at 01:56AM

Show HN: JSON Query https://ift.tt/LVXKpnZ

Show HN: JSON Query I'm working on a tool that will probably involve querying JSON documents, and I'm asking myself how to expose that functionality to my users. I like the power of `jq` and the fact that LLMs are proficient at it, but I find it outright impossible to come up with the right `jq` incantations myself. Has anyone here been in a similar situation? Which tool / language did you end up exposing to your users? https://ift.tt/ilJcHBP October 27, 2025 at 09:52PM
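One lightweight option for the question above is a dotted-path mini-language instead of full jq: keys and list indices separated by dots, e.g. `users.0.name`. A hypothetical sketch, not a recommendation of any particular library:

```python
# Minimal dotted-path query over parsed JSON: supports dict keys and
# list indices, e.g. "users.1.name". No filters, slices, or wildcards.

def query(doc, path):
    cur = doc
    for part in path.split("."):
        if isinstance(cur, list):
            cur = cur[int(part)]   # numeric part indexes into a list
        else:
            cur = cur[part]        # otherwise it's a dict key
    return cur

doc = {"users": [{"name": "Ada"}, {"name": "Linus"}]}
print(query(doc, "users.1.name"))  # Linus
```

The trade-off is expressiveness: users lose jq's filters and transforms, but the syntax is guessable without documentation, which may matter more for end users.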

Sunday, October 26, 2025

Show HN: Helium Browser for Android with extensions support, based on Vanadium https://ift.tt/stR3vCD

Show HN: Helium Browser for Android with extensions support, based on Vanadium I've been working on an experimental Chromium-based browser that brings 2 major features to your phone/tablet: 1. Desktop-style extensions: natively install any extension (like uBO) from the Chrome Web Store; just toggle "desktop site" in the menu first. 2. Privacy/security hardening: applies the full patch sets from Vanadium (with Helium's currently WIP). This means you get both browsers' excellent privacy features, like Vanadium's WebRTC IP policy option that protects your real IP by default, and security improvements such as JIT being disabled by default, all while being a reasonably efficient FOSS app that can be installed on any (modern) Android. It's still in beta, and as I note in the README, it's not a replacement for the full OS-level security model you'd get from running the GrapheneOS + Vanadium combo. However, the goal was to combine the privacy of Vanadium with the power of desktop extensions and Helium features, and make it accessible to a wider audience. (Passkeys from Bitwarden Mobile should also work straight away once merged into the list of FIDO2 privileged browsers.) Build scripts are in the repo if you want to compile it yourself. You can find pre-built releases there too. Would love any feedback/support! https://ift.tt/Yw7I5DF October 27, 2025 at 04:11AM

Show HN: The Legal Embedding Benchmark (MLEB) https://ift.tt/DJf5cBK

Show HN: The Legal Embedding Benchmark (MLEB) Hey HN, I'm excited to share the Massive Legal Embedding Benchmark (MLEB) — the first comprehensive benchmark for legal embedding models. Unlike previous legal retrieval datasets, MLEB was created by someone with actual domain expertise (I have a law degree and previously led the AI team at the Attorney-General's Department of Australia). I came up with MLEB while trying to train my own state-of-the-art legal embedding model. I found that there were no good benchmarks for legal information retrieval to evaluate my model on. That led me down a months-long process working alongside my brother to identify or, in many cases, build our own high-quality legal evaluation sets. The final product was 10 datasets spanning multiple jurisdictions (the US, UK, Australia, Singapore, and Ireland), document types (cases, laws, regulations, contracts, and textbooks), and problem types (retrieval, zero-shot classification, and QA), all of which have been vetted for quality, diversity, and utility. For a model to do well at MLEB, it needs to have both extensive legal domain knowledge and strong legal reasoning skills. That is deliberate — given just how important high-quality embeddings are to legal RAG (particularly for reducing hallucinations), we wanted our benchmark to correlate as strongly as possible with real-world usefulness. The dataset we are most proud of is called Australian Tax Guidance Retrieval. It pairs real-life tax questions posed by Australian taxpayers with relevant Australian Government guidance and policy documents. We constructed the dataset by sourcing questions from the Australian Taxation Office's community forum, where Australian taxpayers ask accountants and ATO officials their tax questions. We found that, in most cases, such questions can be answered by reference to government web pages that, for whatever reason, users were unable to find themselves. 
Accordingly, we manually went through a stratified sample of 112 challenging forum questions and extracted relevant portions of government guidance materials linked to by tax experts that we verified to be correct. What makes the dataset so valuable is that, unlike the vast majority of legal information retrieval evaluation sets currently available, it consists of genuinely challenging real-world user-created questions, rather than artificially constructed queries that, at times, diverge considerably from the types of tasks embedding models are actually used for. Australian Tax Guidance Retrieval is just one of several other evaluation sets that we painstakingly constructed ourselves simply because there weren't any other options. We've contributed everything, including the code used to evaluate models on MLEB, back to the open-source community. Our hope is that MLEB and the datasets within it will hold value long into the future so that others training legal information retrieval models won't have to detour into building their own "MTEB for law". If you'd like to head straight to the leaderboard instead of reading our full announcement, you can find it here: https://ift.tt/9SgvqwO If you're interested in playing around with our model, which happens to be ranked first on MLEB as of 16 October 2025 at least, check out our docs: https://ift.tt/T9fryLB https://ift.tt/ukoseWP October 27, 2025 at 03:46AM
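A benchmark like MLEB ultimately scores retrieval models on metrics such as recall@k over (query, relevant-document) pairs. The sketch below shows that evaluation loop in miniature, using token overlap as a stand-in for embedding similarity; the corpus, IDs, and scoring are invented for illustration.

```python
# Recall@k evaluation loop in miniature. Token overlap stands in for
# the embedding similarity a real benchmark would use.

def recall_at_k(queries, corpus, relevant, k=2):
    hits = 0
    for q, rel_id in zip(queries, relevant):
        q_tokens = set(q.lower().split())
        # Rank every document by similarity to the query.
        scored = sorted(
            corpus.items(),
            key=lambda kv: len(q_tokens & set(kv[1].lower().split())),
            reverse=True,
        )
        top_ids = [doc_id for doc_id, _ in scored[:k]]
        hits += rel_id in top_ids   # did the relevant doc make the cut?
    return hits / len(queries)

corpus = {
    "d1": "capital gains tax on shares",
    "d2": "fringe benefits tax for employers",
    "d3": "gst registration threshold",
}
score = recall_at_k(["do I pay tax on selling shares"], corpus, ["d1"], k=1)
print(score)
```

The hard part the authors describe is not this loop but curating query/relevance pairs that reflect real user questions, which is exactly what the Australian Tax Guidance Retrieval dataset provides.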

Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project) https://ift.tt/Yc0p8bK

Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project)
Hi HN, I’m Dvir, a young developer. Last year, I got rejected after a job interview because I lacked some CPU knowledge. After that, I decided to deepen my understanding of the low-level world and learn how things work under the hood. I decided to try to create an OS in C and ASM as a way to broaden my knowledge in this area. This took me on the most interesting ride, where I’ve learned about OS theory and low-level programming on a whole new level. I’ve spent hours upon hours, blood and tears, reading different OS theory blogs, learning low-level concepts, debugging, testing, and working on this project. I started by reading university books and online blogs, while also watching videos. Some sources that helped me out were OSDev Wiki ( https://ift.tt/BdNKV8W ), OSTEP ( https://pages.cs.wisc.edu/~remzi/OSTEP ), open-source repositories like MellOS and LemonOS (more advanced), DoomGeneric, and some friends that have built an OS before. This part was the longest, but also the easiest. I felt like I understood the theory, but still could not connect it to actual code. Sitting down and starting to code was difficult, but I knew that was the next step I needed to take! I began by working on the bootloader, which is optional since you can use a pre-made one (I switched to GRUB later), but implementing it was mainly for learning purposes and to warm up on ASM. These were my steps after that:
1) VGA driver, which gave me the ability to display text.
2) Interrupts - IDT, ISR, IRQ, which signal to the CPU that a certain event occurred and needs handling (such as faults, hardware device actions, etc).
3) Keyboard driver, which enables me to display the same text I type on my keyboard.
4) PMM (physical memory management).
5) Paging and virtual memory management.
6) RTC driver - clock addition (which was, in my opinion, optional).
7) PIT driver - ticks at a fixed interval.
8) FS (file system) and physical HDD drivers - for the HDD I chose PATA (an HDD communication protocol) for simplicity (SATA is a newer but harder option). For the FS I chose EXT2 (the Second Extended Filesystem), a foundational Linux filesystem introduced in 1993. This FS structure is not the simplest, but it is very popular among hobby OSes: it is well supported, easy to set up and upgrade to newer EXT versions, and has a lot of materials online compared to other options. This was probably the longest and largest feature I worked on.
9) Syscall support.
10) Libc implementation.
11) Processing and scheduling for multiprocessing.
12) A shell to test it all.
At this point, I had a working shell, but later decided to go further and add a GUI! I was working on the FS (stage 8) when I heard about Hack Club’s Summer of Making (SoM). This was my first time participating in Hack Club, and I want to express my gratitude and share my enjoyment of participating in it. At first I just wanted to declare the OS finished after completing the FS and a few other drivers, but because of SoM my perspective changed completely. Because of the competition, I started to think that I needed to ship a complete OS, with processing, a GUI, and the bare minimum ability to run Doom. I wanted to show the community in SoM how everything works. Then I worked on it for another 2 months after finishing the shell, just because of SoM, totalling my project to almost 7 months of work. At this time I added full GUI support, with dirty rectangles and double buffering, I made a GUI mouse driver, and even made a full Doom port! Things I would've never even thought about without participating in SoM. This is my SoM project: https://ift.tt/0rMqIN5 .
Every project has challenges, especially such a low-level one. I had to do a lot of debugging while working on this, and it is no easy task. I highly recommend GDB, which helped me debug so many of my problems, especially memory ones. The first major challenge I encountered was while coding processes - I realized that a lot of my paging code was completely wrong and poorly tested, and it had to be reworked. During this time I was already in the competition, and it was difficult keeping up with devlogs and new features while fixing old problems in code I had written a few months earlier. Some more major problems occurred when trying to run Doom, and unlike the last problem, this was a disaster. I had random page faults and memory problems; one run could work while the next one wouldn't, and the worst part is that it happened only in Doom, not in processes I created myself. These issues took a lot of time to figure out. I began to question the Doom code itself, and even thought about giving up on the whole project. After a lot of time spent debugging, I fixed the issues. It was a combination of scheduling issues, libc issues, and QEMU not having enough memory (I had wrongly assumed 128MB was enough for the whole OS). Finally, I worked through all the difficulties and shipped the project! In the end, the experience of working on this project was amazing. I learned a lot, grew and improved as a developer, and I thank SoM for helping to increase my motivation and make the project memorable and unique like I never imagined it would be. The repo is at https://ift.tt/dNvIc24 . I’d love to discuss any aspect of this with you all in the comments! https://ift.tt/dNvIc24 October 27, 2025 at 02:13AM

Show HN: I Built DevTools for Blazor (Like React DevTools but for .NET) https://ift.tt/gLuqbHm

Show HN: I Built DevTools for Blazor (Like React DevTools but for .NET)
Hi HN! I've been working on developer tools for Blazor that let you inspect Razor components in the browser, similar to React DevTools or Vue DevTools.
The problem: Blazor is Microsoft's frontend framework that lets you write web UIs in C#. It's growing fast but lacks the debugging tools other frameworks have. When your component tree gets complex, you're stuck with Console.WriteLine debugging.
What I built: A browser extension + NuGet package that:
- Shows the Razor component tree in your browser
- Maps DOM elements back to their source components
- Highlights components on hover
- Works with both Blazor Server and WASM
How it works: The NuGet package creates shadow copies of your .razor files and injects invisible markers during compilation. These markers survive the Razor→HTML pipeline. The browser extension reads these markers to reconstruct the component tree.
Current status: Beta - it works but has rough edges. I found some bugs when testing on larger production apps that I'm working through. All documented on GitHub.
Technical challenges solved:
- Getting markers through the Razor compiler without breaking anything
- Working around CSS isolation that strips unknown attributes
- Making it work with both hosting models
It's completely open source: https://ift.tt/OnR0TAa Demo site where you can try it: https://ift.tt/QbMJkAB Would love feedback, especially from anyone building production Blazor apps. What debugging pain points do you have that developer tools could solve? https://ift.tt/GtjA7Cp October 26, 2025 at 10:04PM
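The marker idea described (markers injected at compile time, read back from the rendered DOM) can be illustrated in Python: if each element carries an attribute naming its source component, a tool can rebuild the component tree from nesting alone. The attribute name `data-component` is invented here; the real extension's markers differ.

```python
from html.parser import HTMLParser

# Rebuild a component tree from marker attributes in rendered HTML.
class TreeBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = [("root", [])]   # (name, children) nodes

    def handle_starttag(self, tag, attrs):
        # A marked element becomes a named component node; unmarked
        # elements fall back to their tag name.
        comp = dict(attrs).get("data-component")
        node = (comp or tag, [])
        self.stack[-1][1].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        self.stack.pop()

html = ('<div data-component="App"><ul data-component="TodoList">'
        '<li data-component="TodoItem"></li></ul></div>')
tb = TreeBuilder()
tb.feed(html)
print(tb.stack[0])
```

Nesting in the DOM mirrors nesting in the component hierarchy, which is why surviving the Razor→HTML pipeline is the hard part: lose the markers and the hierarchy is gone.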

Saturday, October 25, 2025

Show HN: I created a small 2D game about an ant https://ift.tt/mG1xUjR

Show HN: I created a small 2D game about an ant Hello HN! I created a short game in a few days, just for fun, where you play as an ant and feed it apples. The game also features random landscape generation, where clouds and trees are randomly distributed across all coordinates (except that trees stay on the ground and don't vary in the y direction). This is what took me the longest :) I would appreciate your feedback ^ ^ https://ift.tt/87xT0zg October 26, 2025 at 12:50AM

Show HN: Random Makers – Show HN and Product Hunt, but Faster and Not Corporate https://ift.tt/GxwE571

Show HN: Random Makers – Show HN and Product Hunt, but Faster and Not Corporate https://ift.tt/x83q4rK October 25, 2025 at 11:32PM

Friday, October 24, 2025

Show HN: Wsgrok – one of many ngrok alternatives https://ift.tt/HpLmoeC

Show HN: Wsgrok – one of many ngrok alternatives I built it for myself because ngrok didn't let me add one more domain unless I paid $12 more. I probably should've looked for alternatives before building my own, but my grudge got in the way. Once I started, I wanted to make it better than the other options. Silly, I know; no one probably cares. I plan to open source it sometime next year because I’ve got other projects to finish first. It's free until I deplete my cloud credits; then it will switch to a tier-based model with a free option. https://wsgrok.com October 25, 2025 at 05:16AM

Show HN: Pensive – A bookmarking tool with full-text search and LLMs https://ift.tt/UpsYaMI

Show HN: Pensive – A bookmarking tool with full-text search and LLMs After Pocket shut down, I started working on a new bookmarking solution. My main goal was to make bookmarks searchable, something that Pocket was not good at. So I built Pensive, a bookmarking solution that saves the full page content and makes it searchable with full-text search. You can add pages using a browser extension or a Telegram bot (for saving on mobile). It is written in Go with PostgreSQL, all Dockerized on a $5 Hetzner server, and the front end is built with HTMX. I have also added embeddings (using Gemini Flash Lite) to let LLMs interact with your bookmarks contextually. It is stable enough that I now use it daily. I’m considering publishing it as open source, but first I want to have a proper version ready. Feel free to email me if you’re interested in helping out or have prior experience with open source. You can try it at https://getpensive.com https://getpensive.com/ October 25, 2025 at 03:28AM

Show HN: The Σ-Manifold Manifesto https://ift.tt/D42x5sz

Show HN: The Σ-Manifold Manifesto This project explores the connection between the linear structure of text and its emotional-aesthetic impact. We identify five fundamental relations between consecutive sentences, labeled A–E. Each represents a shift of subject-object, i.e., a transformation of perspective and agency. When texts grow longer, these relations form sequences, and from the infinite combinatorial space, eight stable patterns (Σ₁–Σ₈) emerge empirically. Each pattern correlates with a distinct semantic and emotional field: cathartic, heroic, meditative, humorous, and so on. This allows us to instruct an LLM not through semantic prompts ("write a story about…"), but through structural commands, e.g., generate a narrative following sequence Σ₅ (Tragic Counterpoint). You can experiment with these archetypes directly here: Narrative Generator ( https://ift.tt/SmRZya7... ) or via Python ( https://ift.tt/ULfv684... ). Interestingly, there appears to be a parallel between these textual progressions and musical harmony. For example, if A–E are mapped to harmonic functions (I, IV, V, vi, ii), the narrative sequences behave like emotional "chord progressions", where meaning flows, modulates, and resolves. Coherence in the generated text arises not only from syntax, but from the associative field that the LLM constructs around these shifting relations. When asked to "switch subjects," it spontaneously moves from Poet to Writer, preserving aesthetic continuity rather than randomness. It might even hint at how children acquire language: by first sensing the melody of structural transitions, before mapping them to concepts and emotions. Such a method could eventually apply to training neural systems, where meaning is learned as flow, not as fixed representation. Full text: https://ift.tt/ockTvpx... October 25, 2025 at 01:34AM

Show HN: Check what is hogging your disk zpace https://ift.tt/BzojHZX

Show HN: Check what is hogging your disk zpace Hello world! I would like to share a simple open-source Python CLI app I created to check what's hogging all the disk space! You can install it with pip. It's like space, but Zpace. Run pip install zpace and find the big files consuming your disk space, be it apps, virtual environments, machine learning models, etc. Just run zpace. Once you find one that you can say "bye" to, just run rm -rf /i_dont_need/this/file/right_now to get rid of it. It was born out of frustration when a lack of disk space prevented me from using my laptop properly. It has been very useful to me so far, and I hope it can be useful to you as well. Feel free to check it out. Currently tested on macOS only. https://ift.tt/CkN7Vwt October 25, 2025 at 01:12AM
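The core scan behind a tool like this can be sketched in a few lines: walk the tree, collect sizes, report the biggest entries. This is a minimal stand-alone version written for illustration, not zpace's actual code.

```python
import os

# Walk a directory tree and return the biggest files, largest first.
def biggest_files(root, top=5):
    sizes = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish mid-scan or deny access
    return sorted(sizes, reverse=True)[:top]

for size, path in biggest_files("."):
    print(f"{size / 1e6:8.2f} MB  {path}")
```

A production tool would also aggregate sizes per directory (so a virtualenv or model cache shows up as one big entry), but per-file scanning is the foundation.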

Thursday, October 23, 2025

Show HN: Git for LLMs – a context management interface https://ift.tt/VSmGoYW

Show HN: Git for LLMs – a context management interface Hi HN, we’re Jamie and Matti, co-founders of Twigg. During our master’s we continually found the same pain points cropping up when using LLMs. The linear nature of typical LLM interfaces - like ChatGPT and Claude - makes it really easy to get lost, with no easy way to visualise or navigate your project. Worst of all, none of them are well suited to long-term projects. We found ourselves spending days using the same chat, only for it to eventually break. Transferring context from one chat to another is also cumbersome. We decided to build something more intuitive to the way humans think. We started with two simple ideas: chat branching for exploring tangents, and an interactive tree diagram for easy visualisation and navigation of your project. Twigg has developed into an interface for context management - like "Git for LLMs". We believe the input to a model - the context - is fundamental to its performance. To extract the maximum potential of an LLM, users need complete control over exactly what context is provided to the model, which you can do using simple features like cut, copy, and delete to manipulate your tree. Through Twigg, you can access a variety of LLMs from all the major providers, like ChatGPT, Gemini, Claude, and Grok. Aside from a standard tiered subscription model (free, plus, pro), we also offer a Bring Your Own Key (BYOK) service, where you can plug and play with your own API keys. Our target audience is technical users who use LLMs for large projects on a regular basis. If this sounds like you, please try out Twigg; you can sign up for free at https://twigg.ai/ . We would love to get your feedback! https://twigg.ai October 23, 2025 at 08:42PM
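The branching idea can be sketched as a simple parent-pointer tree: the context sent to a model is just the path from a node back to the root, so sibling branches never leak into each other. The structure below is invented for illustration and is not Twigg's data model.

```python
# Branching chat tree: each message points at its parent.
class Node:
    def __init__(self, text, parent=None):
        self.text, self.parent, self.children = text, parent, []
        if parent:
            parent.children.append(self)

def context_for(node):
    # Walk up to the root, then reverse: only this branch's messages
    # are included; messages on other branches are excluded.
    path = []
    while node:
        path.append(node.text)
        node = node.parent
    return list(reversed(path))

root = Node("Explain monads")
a = Node("Analogy please", root)            # branch 1
b = Node("Formal definition please", root)  # branch 2
print(context_for(a))
```

Cut/copy/delete on the tree then map to reparenting, duplicating, or pruning nodes, which is what makes the Git analogy apt.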

Show HN: Tommy – Turn ESP32 devices into through-wall motion sensors https://ift.tt/LKxYT5N

Show HN: Tommy – Turn ESP32 devices into through-wall motion sensors Hi HN! I would like to present my project called TOMMY, which turns ESP32 devices into motion sensors that work through walls and obstacles using Wi-Fi sensing. TOMMY started as a project for my own use. I was frustrated with motion sensors that didn't detect stationary presence and left dead zones everywhere. Presence sensors existed but were expensive and needed one per room. I explored echo localization first, but microphones listening 24/7 felt too creepy. Then I discovered Wi-Fi sensing - a huge research topic but nothing production-ready yet. It ticked all the boxes: it could theoretically detect stationary presence through breathing/micromovements and worked through walls and furniture, so devices could be hidden away. Two years and dozens of research papers later, TOMMY has evolved into software I'm honestly quite proud of. Although it doesn't have stationary presence detection yet (coming Q1 2026), it detects motion really well. It works as a Home Assistant Add-on or Docker container, supports a range of ESP32 devices, and can be flashed through the built-in tool or used alongside existing ESPHome setups. I released the first version a couple of months ago on Home Assistant's subreddit and got a lot of interest and positive feedback. More than 200 people joined the Discord community and almost 2,000 downloaded it. Right now TOMMY is in beta, which is completely free for everyone to use. I'm also offering free lifetime licenses to every beta user who joins the Discord channel. You can read more about the project on https://ift.tt/kuWj0E2 . Please join the Discord channel if you are interested in the project. A note on open source: There's been a lot of interest in having TOMMY as an open source project, which I fully understand. I'm reluctant to open source before reaching sustainability, as I'd love to work on this full time.
However, privacy is verifiable - it's 100% local with no data collection (easily confirmed via packet sniffing or network isolation). Happy to help anyone verify this. https://ift.tt/kuWj0E2 October 23, 2025 at 10:34PM
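Wi-Fi sensing of the kind described above typically works by watching how channel state information (CSI) fluctuates when a person moves. A minimal conceptual sketch of that idea in Python - a sliding-window variance threshold over per-packet amplitude readings. The window size and threshold are made up for illustration, and this is not TOMMY's actual algorithm:

```python
import statistics

def detect_motion(amplitudes, window=8, threshold=0.5):
    """Flag windows whose amplitude variance exceeds a threshold.

    `amplitudes` is a flat list of per-packet CSI magnitude readings;
    `window` and `threshold` are arbitrary, purely illustrative values.
    """
    flags = []
    for i in range(0, len(amplitudes) - window + 1):
        var = statistics.pvariance(amplitudes[i:i + window])
        flags.append(var > threshold)
    return flags

# A stable signal (empty room) vs. a fluctuating one (movement).
still = [1.0, 1.01, 0.99, 1.0, 1.02, 0.98, 1.0, 1.01]
moving = [1.0, 2.5, 0.4, 3.1, 0.2, 2.8, 0.5, 3.0]
print(detect_motion(still))   # variance tiny -> [False]
print(detect_motion(moving))  # variance large -> [True]
```

Real systems look at many subcarriers at once and use far more robust statistics, which is presumably where the two years of research papers come in.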

Wednesday, October 22, 2025

Show HN: Middlerok – reduces front end-back end integration from weeks to hours https://ift.tt/oA7lGJj

Show HN: Middlerok – reduces front end-back end integration from weeks to hours Generate production-ready OpenAPI specs, frontend & backend code and documentation with AI https://ift.tt/SWoHZNG October 22, 2025 at 11:05PM

Show HN: Incremental JSON parser for streaming LLM tool calls in Ruby https://ift.tt/GUnFxMq

Show HN: Incremental JSON parser for streaming LLM tool calls in Ruby Built this for streaming AI tool calls. LLMs stream function arguments as JSON character by character. Most parsers reparse from scratch on every chunk - O(n²) behavior that causes UI lag. This one maintains parsing state and processes only the new characters - true O(n) performance, so latency stays imperceptible throughout the entire response. Ruby gem, MIT licensed. Would love feedback. https://ift.tt/HntQT5R October 23, 2025 at 01:02AM
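The parsing-state idea can be sketched in a few lines: track string/escape state and bracket depth so each streamed character is examined exactly once, and you know the moment the top-level value is complete. This is an illustrative Python sketch of the general technique, not the Ruby gem's API:

```python
import json

class IncrementalScanner:
    """Feed JSON text chunk by chunk; each character is examined once.

    Tracks string/escape state and bracket depth to know when the
    top-level value is complete - the core trick behind O(n) streaming
    parsers (a simplified sketch; a full parser would also surface
    partial values as they arrive).
    """

    def __init__(self):
        self.buf = []
        self.depth = 0
        self.in_string = False
        self.escaped = False
        self.complete = False

    def feed(self, chunk):
        for ch in chunk:
            self.buf.append(ch)
            if self.escaped:
                self.escaped = False
            elif self.in_string:
                if ch == "\\":
                    self.escaped = True
                elif ch == '"':
                    self.in_string = False
            elif ch == '"':
                self.in_string = True
            elif ch in "{[":
                self.depth += 1
            elif ch in "}]":
                self.depth -= 1
                if self.depth == 0:
                    self.complete = True
        return self.complete

    def result(self):
        return json.loads("".join(self.buf))

# Simulate an LLM streaming tool-call arguments a few characters at a time.
scanner = IncrementalScanner()
for chunk in ['{"city": "Par', 'is", "uni', 'ts": "metric"}']:
    done = scanner.feed(chunk)
print(done)              # True
print(scanner.result())  # {'city': 'Paris', 'units': 'metric'}
```

Each `feed` call touches only the new characters, which is what keeps the whole stream O(n) instead of re-scanning the accumulated buffer on every chunk.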

Tuesday, October 21, 2025

Show HN: I use ChatGPT these days to develop new features quickly https://ift.tt/KDhS0vo

Show HN: I use ChatGPT these days to develop new features quickly https://ift.tt/4CyfwzE October 22, 2025 at 02:28AM

Show HN: MTOR – A free, local-first PWA to automate workout progression https://ift.tt/ElvjZ6f

Show HN: MTOR – A free, local-first PWA to automate workout progression Hi HN, My motivation for this came from frustration with existing workout trackers. Most felt clunky, hid core features like performance graphs behind a paywall, or forced a native app download. A few people close to me who take their training seriously shared the same sentiment, so I decided to build my own. I'm working on mTOR, a free, science-based workout tracker I built to automate progressive overload. It's a local-first PWA that works completely offline, syncs encrypted between your devices using passwordless passkeys, and allows for plan sharing via a simple link. The core idea is to make progression easier to track and follow. After a workout, it analyzes your performance (weight, reps, and RIR), highlights new personal records (PRs), and generates specific targets for your next session. It also reviews your entire program to provide scientific analysis on weekly volume, frequency, and recovery for each muscle group. This gets displayed visually on an anatomy model to help you learn which muscles are involved, and you can track your gains over time with historical performance charts for each exercise. During a workout, you get a total session timer, an automatic rest timer, and can see your performance from the last session for a clear target to beat. It automatically advances to the next incomplete exercise, and when you need to swap an exercise, it provides context-aware alternatives targeting the same muscles. It's also deeply customizable: * The UI has a dark theme, supports multiple languages (English, Spanish, German), lets you adjust the UI scale, and toggle the visibility of detailed muscle names, exercise types, historical performance badges, and a full history card. * You can set global defaults for weight units (kg/lbs), rest times, and plan targets, or enable/disable metrics like Reps in Reserve (RIR) and estimated 1-Rep Max. 
The exercise library can be filtered by your available equipment, you can create your own custom exercises with global notes, and there's a built-in weight plate calculator. * The progression system lets you define default rep ranges and RIR targets, or create specific overrides for different lifts (e.g., a 3-5 rep range for strength, 10-15 for accessories). * Editing is flexible: you can drag-and-drop to reorder days, exercises, and sets, duplicate workout days, track unilateral exercises (left/right side), and enter data with a quick wheel picker. I'll be here all day to answer questions. I'm also thinking about making the project open-source down the line and would be curious to hear any thoughts on that. Thanks for checking it out! https://mtor.club/ October 22, 2025 at 12:04AM
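For the estimated 1-Rep Max mentioned above, a common choice is the Epley formula, 1RM ≈ weight × (1 + reps/30). The sketch below pairs it with a toy double-progression rule (add reps to a ceiling, then add weight and reset); the progression logic and all the numbers are purely illustrative, since the app's actual algorithm isn't described:

```python
def epley_1rm(weight, reps):
    """Estimate one-rep max with the Epley formula: w * (1 + reps / 30)."""
    return weight * (1 + reps / 30)

def next_target(weight, reps, rir, rep_ceiling=12, increment=2.5):
    """Toy double-progression rule (illustrative, not mTOR's algorithm):
    add a rep each session until the ceiling, then add weight and drop
    back to the bottom of the rep range. RIR gates the weight increase."""
    if reps >= rep_ceiling and rir >= 1:
        return weight + increment, rep_ceiling - 4  # e.g. back to 8 reps
    return weight, reps + 1

print(round(epley_1rm(100, 5), 1))  # 116.7
print(next_target(100, 12, 2))      # (102.5, 8)
print(next_target(100, 9, 1))       # (100, 10)
```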

Show HN: bbcli – A TUI and CLI to browse BBC News like a hacker https://ift.tt/Yk7H5Rp

Show HN: bbcli – A TUI and CLI to browse BBC News like a hacker hey hn! I (re)built this TUI tool for browsing BBC News in the terminal. It uses an RSS feed to fetch headlines and previews, and you can read full articles too. Try it out and let me know what you think! :) https://ift.tt/ikZK4Yl October 19, 2025 at 04:28PM

Monday, October 20, 2025

Show HN: Online Sourcerer – The best answer to 'source?' https://ift.tt/LtArB0x

Show HN: Online Sourcerer – The best answer to 'source?' Hello, I made this site to combat misinformation on the internet by allowing users to prove that their claim is valid by linking multiple sources and combining them in a single link. It's very early stage, so I would love feedback on: - What types of claims would be most useful to you? - How can I make verification/sourcing more robust? - Any features that would make this actually useful vs just interesting? Thanks in advance, feel free to roast :) https://ift.tt/IHXR6q2 October 21, 2025 at 03:46AM

Show HN: I created a cross-platform GUI for the JJ VCS (Git compatible) https://ift.tt/ZASN49P

Show HN: I created a cross-platform GUI for the JJ VCS (Git compatible) Personally, I think the JJ VCS ( https://ift.tt/G06Tdyj ) hit a point sometime in the past year where it's hard for me to find a good reason to keep using git. Over the years I've cobbled together aliases and bash functions to try to improve my git workflow, but after using jj, which works with ~any git repo and integrates great with GitHub repos, all of the workflow issues I ran into with git are not only solved, but improved in ways I couldn't manage with simple scripts. One example is the op log, which lets you go to any point in your repo's history and provides simple undo and redo commands when you want to back out of a merge, didn't mean to rebase, etc. Because I have a pretty strong conviction that JJ is at this point a cleaner and more powerful version of git, my hope is that it continues to grow. With that said, a proper full-featured GUI seemed to be missing for the VCS. There are some plugins that add integration into VS Code, and there's one in the works for IntelliJ support, but many of the constructs JJ provides, in my opinion, necessitate a ground-up build of a GUI around how JJ works. Right now, Judo for JJ is an MVP in an open beta. I did my best to support all of the core functionality one would need, though there are many nice-to-haves that I'm going to add, like native merge support, native splitting, etc. Most of this will be based on feedback from the beta. I'm really grateful for the great community JJ has built, alongside the HN community itself in the countless VCS-based posts I've read over the years, and am hoping for lots of input here during the beta under real usage - the goal is to be a full-featured desktop GUI for the VCS, similar to many of the great products that are out there for git. https://judojj.com October 20, 2025 at 09:05PM

Show HN: NativeBlend – Text to fully editable 3D Models that don't suck https://news.ycombinator.com/item?id=45647738

Show HN: NativeBlend – Text to fully editable 3D Models that don't suck I'm a developer (not a 3D artist) who's been frustrated with current AI text-to-3D tools — most produce messy, monolithic meshes that are unusable without hours of cleanup. So I built NativeBlend, a side project aimed at generating editable 3D assets that actually fit into a real workflow. Key features: - Semantic Part Segmentation: Outputs separate, meaningful components (e.g., wheels, doors), not just a single mesh blob. - Native Blender Output: Generates clean, structured .blend files with proper hierarchies, editable PBR materials, and decent UVs — no FBX/GLB cleanup required. The goal is to give devs a usable starting point for game assets without the usual AI slop. I have a working demo and would love feedback: Does this solve a real need, or am I just scratching my own itch? Thanks for taking a look! https://native-blend-app.vercel.app/ October 21, 2025 at 12:27AM

Show HN: Smash Balls – Breakout and Vampire Survivors https://ift.tt/cAfznH9

Show HN: Smash Balls – Breakout and Vampire Survivors I made it with 120% vibe coding. Enjoy! Free and no ads. https://ift.tt/WoGX42r October 20, 2025 at 01:26PM

Sunday, October 19, 2025

Show HN: Hokusai Pocket (WIP) – Portable GUIs with MRuby https://ift.tt/oNsr0Ei

Show HN: Hokusai Pocket (WIP) – Portable GUIs with MRuby Whassup? A couple years ago, I started a project for easily authoring GUIs with Ruby. The project is named Hokusai. It features the ability to compose reactive UI components with events and props, and uses a unique-ish template language. More information on Hokusai can be found here: https://ift.tt/UZFh9XR Since then I've worked on Hokusai Native ( https://ift.tt/Di8WJNA ), which compiles a GraalVM native image / TruffleRuby version of Hokusai that can run / interpret these lil' gui apps. It's quite bloated though, as it has to ship all of TruffleRuby + native image and supporting libs. Recently, I applied for a grant to develop a more portable version of this library using MRuby, and got pretty far while waiting for the results. It is named Hokusai Pocket and I consider it to be the final form/approach of this project. I wrote a builder in crystal-lang that embeds the entire Hokusai ruby code as MRuby bytecode, as well as the supporting C code. It can scaffold new projects by building tree-sitter/mruby/raylib, and outputs a binary from a source ruby file. It produces pretty small binaries (~3mb for MacOS) and uses raylib as the rendering engine. For a gif and example of a Hokusai Pocket demo please direct your mouse clicks to this gist: https://gist.github.com/skinnyjames/b510185c6bd83fd4e1a41324... I'd love to hear how this project plays for people. Still working on building for different targets, but android and web should be possible. The project is still undergoing active development, but any help is appreciated. The license is MIT. There's also a Discord channel if you want to get help / chat / collaborate: https://ift.tt/by8rzWi _ (^) (_\ |_| \_\ |_| _\_\,/_| (`\(_|`\| (`\,) \ \ \,) | | \__(__| https://ift.tt/WNzCiDn October 20, 2025 at 07:00AM

Show HN: 18yo first iOS app: blocks distracting apps and unlocks with QR/barcode https://ift.tt/C9tTIVl

Show HN: 18yo first iOS app: blocks distracting apps and unlocks with QR/barcode I built Recode because I realized I was spending 8-10 hours a day on my phone pretty consistently. I tried other screen time apps, but I found them too easy to bypass, ending my blocks whenever I wanted to use an app. My solution was to build an app blocker that makes users scan a physical QR/barcode to take a break from their app blocks. This helped me get my screen time down to just a few hours every day, since I didn't want to physically get up and go across the house to get my barcode. Anyway, since it worked for me I felt like sharing it. App store link: https://ift.tt/zQlo6KS... https://ift.tt/U1dBJaP October 20, 2025 at 03:00AM

Show HN: Jotite – A whimsical Linux Markdown note-taking app https://ift.tt/i7LuMkx

Show HN: Jotite – A whimsical Linux Markdown note-taking app https://ift.tt/aXK1Hs8 October 20, 2025 at 01:32AM

Show HN: WP-Easy, framework to build WordPress themes https://ift.tt/gzfie8R

Show HN: WP-Easy, framework to build WordPress themes The inspiration for this framework came from my brother, an amazing graphic designer who wanted to build WordPress themes using only his FTP-based code editor. He knows HTML and CSS really well, and some jQuery, but not modern JavaScript. In my experience, this is common for people whose jobs are tangential to frontend web development... designers, copywriters, project managers, and backend engineers. So this is for people who don't want to deal with the mess of modern build tools. It tries to nudge people into a more modern direction: component-based architecture, JS modules, SCSS, and template routing. WP-Easy lets people like my brother build professional, modern themes without the usual barriers, just code with your favorite editor and see the results instantly. Key features: 1. File-based routing - Define routes in router.php with Express-like syntax (/work/:slug/) 2. Single File Components - PHP templates with

Saturday, October 18, 2025

Show HN: Odyis: lunar lander (1979) clone written in Rust https://ift.tt/6JfEtR0

Show HN: Odyis: lunar lander (1979) clone written in Rust Moin, to learn Rust I decided to create a simple clone of the original lunar lander game. I would love to hear feedback on the quality of the code! https://ift.tt/YztNIJp October 19, 2025 at 12:27AM

Friday, October 17, 2025

Show HN: OneClickPRD – Save hours vibe coding with concise PRDs https://ift.tt/bjeNqHf

Show HN: OneClickPRD – Save hours vibe coding with concise PRDs Hi HN, I built OneClickPRD because as a solo builder I often wasted hours vibe coding without clear goals. I’d start with an idea, but it was vague, so the code got messy and I had to redo things. OneClickPRD asks you a few questions about your product and then generates a short, structured PRD. The format works well with AI tools like Replit, Lovable, or v0, so you can go from idea to working MVP much faster. Demo: https://ift.tt/vZ7KwCo Would love your feedback: does this feel useful for your projects, and what would make it better? https://ift.tt/vZ7KwCo October 18, 2025 at 02:55AM

Show HN: I turned my resume into a catchy song. It's a game changer https://ift.tt/DSUtf8z

Show HN: I turned my resume into a catchy song. It's a game changer I turned my resume into a catchy pop song. Thought you'd all appreciate it. Worked directly on the Song Style prompt, which you can duplicate for your own fun catchy resume song. Just replace the lyrics! https://ift.tt/IBY4jv3 October 18, 2025 at 02:22AM

Show HN: We packaged an MCP server inside Chromium https://ift.tt/eaYPknb

Show HN: We packaged an MCP server inside Chromium Hey HN, we just shipped a browser with an inbuilt MCP server! We're a YC startup (S24) building BrowserOS — an open‑source Chromium fork. We're a privacy‑first alternative to the new wave of AI browsers like Dia, Perplexity Comet. Since launching ~3 months ago, the #1 request has been to expose our browser as an MCP server. -- Google beat us to launch with chrome-devtools-mcp (solid product btw), which lets you build/debug web apps by connecting Chrome to coding assistants. But we wanted to take this a step further: we packaged the MCP server directly into our browser binary. That gives three advantages: 1. MCP server setup is super simple — no npx install, no starting Chrome with CDP flags, you just download the BrowserOS binary. 2. With our browser's inbuilt MCP server, AI agents can interact using your logged‑in sessions (unlike chrome-devtools-mcp which starts a fresh headless instance each time) 3. Our MCP server also exposes new APIs from Chromium's C++ core to click, type, and draw bounding boxes on a webpage. Our APIs are also not CDP-based (Chrome DevTools Protocol) and hold up well against anti-bot detection. -- A few example use cases for BrowserOS-mcp: a) *Frontend development with Claude Code*: instead of screenshot‑pasting, claude-code gets WYSIWYG access. It can write code, take a screenshot, check console logs, and fix issues in one agentic sweep. Since it has your sessions, it can do QA stuff like "test the auth flow with my Google Sign‑In." Here's a video of claude-code using browserOS to improve the css styling with back-and-forth checking: https://youtu.be/vcSxzIIkg_0 b) *Use as an agentic browser:* You can install BrowserOS-mcp in claude-code or Claude Desktop and do things like form-filling, extraction, multi-step agentic tasks, etc. It honestly works better than Perplexity Comet! 
Here's a video of claude-code opening top 5 hacker news posts and summarizing: https://youtu.be/rPFx_Btajj0 -- *How we packaged MCP server inside Chromium binary*: We package the server as a Bun binary and expose MCP tools over HTTP instead of stdio (to support multiple sessions). And we have a BrowserOS controller installed as an extension at the application layer which the MCP server connects to over WebSocket to control the browser. Here's a rough architecture diagram: https://dub.sh/browseros-mcp-diag -- *How to install and use it:* We put together a short guide here: https://ift.tt/ZMTzLIB Our vision is to reimagine the browser as an operating system for AI agents, and packaging an MCP server directly into it is a big unlock for that! I'll be hanging around all day, would love to get your feedback and answer any questions! https://ift.tt/PyiUjo7 October 17, 2025 at 09:52PM
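Under the hood MCP is JSON-RPC 2.0, so talking to an HTTP-exposed MCP server like this one comes down to POSTing JSON-RPC payloads such as the standard `tools/list` method. A hedged Python sketch - the endpoint URL below is hypothetical, so check the BrowserOS install guide for the real host and port:

```python
import json
import urllib.request

# MCP speaks JSON-RPC 2.0; `tools/list` is a standard MCP method.
# The endpoint below is hypothetical - check the BrowserOS guide for
# the actual host/port its built-in server listens on.
MCP_ENDPOINT = "http://localhost:3000/mcp"

def list_tools_request():
    """Build a JSON-RPC 2.0 request asking the server for its tools."""
    return {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

def call_mcp(endpoint, payload):
    """POST a JSON-RPC payload and decode the JSON response."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = list_tools_request()
print(payload["method"])  # tools/list
# tools = call_mcp(MCP_ENDPOINT, payload)  # requires a running server
```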

Thursday, October 16, 2025

Show HN: Arky – Visual 2D Markdown editor (access codes below) https://ift.tt/0n2QumZ

Show HN: Arky – Visual 2D Markdown editor (access codes below) Hey HN! Arky is a markdown editor with a twist — instead of writing in a linear doc, you work on a 2D canvas where you can: • Place ideas anywhere spatially • Organize them into hierarchy with drag & drop • See the full document structure at a glance • AI writes contextually — drag & drop responses anywhere on canvas More info: https://arky.so/ Try it: https://app.arky.so/ (Access codes are in the comment below!) Quick demo(60s): https://youtu.be/Nxsr5Ag2vEM?si=g6nKheRWWNuaLTe8 Would love to hear your thoughts! https://app.arky.so October 17, 2025 at 01:20AM

Show HN: Inkeep (YC W23) – Agent builder that works both visually and in code https://ift.tt/FLbX2Wp

Show HN: Inkeep (YC W23) – Agent builder that works both visually and in code Hi HN! I'm Nick from Inkeep. We built an agent builder with true 2-way sync between code and a drag-and-drop visual editor, so devs and non-devs can collaborate on the same agents. Here’s a demo video: https://ift.tt/3ibDcTt . As a developer, the flow is: 1) Build AI Chat Assistants or AI Workflows with the TypeScript SDK 2) Run `inkeep push` from your CLI to publish 3) Edit agents in the visual builder (or hand off to non-technical teams) 4) Run `inkeep pull` to edit in code again. We built this because we wanted the accessibility of no-code workflow builders (n8n, Zapier), but the flexibility and devex of code-based agent frameworks (LangGraph, Mastra). We also wanted first-class support for chat assistants with interactive UIs, not just workflows. OpenAI got close, but you can only do a one-time export from visual builder to code and there’s vendor lock-in. How I've used it: I bootstrapped a few agents for our marketing and sales teams, then was able to hand off so they can maintain and create their own agents. This has enabled us to adopt agents across technical and non-technical roles in our company on a single platform. To try it, here’s the quickstart: https://ift.tt/gGjXvfT . We leaned on open protocols to make it easy to use agents anywhere: An MCP endpoint, so agents can be used from Cursor/Claude/ChatGPT A Chat UI library with interactive elements you can customize in React An API endpoint compatible with the Vercel AI SDK `useChat` hook Support for Agent2Agent (A2A) so they work with other agent ecosystems We made some practical templates like a customer_support, deep_research, and docs_assistant. Deployment is easy with Vercel/Docker with a fair-code license and there's a traces UI and OTEL logs for observability. Under the hood, we went all-in on a multi-agent architecture. Agents are made up of LLMs, MCPs, and agent-to-agent relationships. 
We’ve found this approach to be easier to maintain and more flexible than traditional “if/else” approaches for complex workflows. The interoperability works because the SDK and visual builder share a common underlying representation, and the Inkeep CLI bridges it with a mix of LLMs and TypeScript syntactic sugar. Details in our docs: https://docs.inkeep.com . We’re open to ideas and contributions! And would love to hear about your experience building agents - what works, hasn’t worked, what’s promising? https://ift.tt/jZvIEx8 October 16, 2025 at 06:20PM

Show HN: Coordable – Get better geocoding results with AI cleaning and analytics https://ift.tt/K6MDBeq

Show HN: Coordable – Get better geocoding results with AI cleaning and analytics I’ve been working on a tool called Coordable, which helps analyze and improve geocoding results. If you’ve ever dealt with geocoding at scale, you’ve probably hit two recurring problems: Garbage in = garbage out. Addresses are often messy (“2nd floor”, “/”, abbreviations, multiple addresses in one line…). Most geocoders will fail or return incorrect matches if the input isn’t perfectly normalized. A result isn’t always a correct result. Many providers return something even if it’s wrong — e.g. shifting a house number, or confusing similar street names. Assessing whether a geocoded result is actually right is surprisingly hard to automate. Coordable tries to address both issues with AI and analytics: Uses an LLM-based cleaner to normalize messy addresses (multi-country support). Automatically evaluates geocoding accuracy by comparing input and output like a human would. Lets you benchmark multiple providers (Google, HERE, Mapbox, Census, BAN, etc.) side by side. Includes a dashboard to visualize results, quality metrics, and exports. It’s not a new geocoder — it wraps existing APIs and focuses on data quality, comparison, and automation. It’s currently in beta with free credits. If you work with geocoding or address data, I’d love to hear how you handle these challenges and what kind of analytics would be most useful to you. https://coordable.co/ October 16, 2025 at 11:11PM
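For contrast with an LLM-based cleaner, here is what a hand-rolled, rule-based baseline for the "garbage in" problem might look like. This is illustrative only (Coordable's cleaner is an LLM, not regexes), and the patterns cover just the examples from the post:

```python
import re

# A few rule-based cleanups illustrating the messy-address problem:
# floor/apartment noise, abbreviations, and stray separators.
NOISE = re.compile(r"\b(2nd|3rd|\d+th)\s+floor\b|\bapt\.?\s*\w+\b", re.I)
ABBREVIATIONS = {r"\bst\b\.?": "Street", r"\bave\b\.?": "Avenue"}

def clean_address(raw):
    out = NOISE.sub("", raw)
    for pattern, full in ABBREVIATIONS.items():
        out = re.sub(pattern, full, out, flags=re.I)
    out = re.sub(r"\s*/\s*", " ", out)           # stray separators
    return re.sub(r"\s{2,}", " ", out).strip(" ,")

print(clean_address("12 Main St., 2nd floor / Springfield"))
# 12 Main Street, Springfield
```

The brittleness of this approach (every country, language, and formatting quirk needs its own rule) is exactly the argument for handing normalization to an LLM instead.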

Wednesday, October 15, 2025

Show HN: Shorter – search for shorter versions of your domain https://ift.tt/j79ohwY

Show HN: Shorter – search for shorter versions of your domain https://shorter.dev October 16, 2025 at 07:29AM

Show HN: Achilleus – Security monitoring for agencies managing client websites https://ift.tt/g8TZXn5

Show HN: Achilleus – Security monitoring for agencies managing client websites Most security tools are either free-but-limited (SSL Labs) or enterprise-priced ($200+/month). Nothing existed for freelancers and small agencies managing multiple sites affordably. So I built it. It scans all your domains in ~30 seconds, shows you a simple security score, flags issues, and generates professional PDFs you can send to clients. No complex setup, no security expertise required. Currently $27/month for 10 domains with unlimited scans. The MVP is live and working well, but I want real feedback before pushing hard on growth. Looking for beta users—especially freelancers or small agency owners managing 5+ client sites. If you're interested, I'd love to hear what works and what doesn't: https://achilleus.so Happy to answer questions in the comments. https://ift.tt/mpv9aqk October 16, 2025 at 07:34AM

Show HN: Specific (YC F25) – Build backends with specifications instead of code https://ift.tt/4R2t3Tj

Show HN: Specific (YC F25) – Build backends with specifications instead of code Hi folks! Iman and I (Fabian) have been building Specific for a while now and are finally opening up our public beta. Specific is a platform for building backend APIs and services entirely through natural-language specifications and tests, without writing code. We then automatically turn your specs into a working system and deploy it for you, along with any infrastructure needed. We know a lot of developers who have already adopted spec-driven development to focus on high-level design and let coding agents take care of implementation. We are attempting to take this even further by making the specs themselves the source of truth. Of course, we can’t blindly trust coding agents to follow the spec, so we also support adding tests that will run to ensure the system behaves as expected and to avoid regressions. There is so much ground to cover, so we are focusing on a smaller set of initial features that in our experience should cover a large portion of backends: - An HTTP server for each project. Authentication can be added by simply stating in the spec how you want to protect your endpoint. - A database automatically spun up and schema configured if the spec indicates persistence is needed. - External APIs can be called. You can even link out to API docs in your specs. You currently can’t see the generated code, but we are working on enabling it. Of course, we don’t claim any ownership of the generated code and will gladly let you export it and continue building elsewhere. Specific is free to try and we are really eager to hear your feedback on it! Try it here: https://ift.tt/QAJkVCO https://specific.dev/ October 15, 2025 at 10:51PM

Show HN: Pxxl App – A Nigerian Alternative to Vercel, Render, and Netlify https://ift.tt/ldObz1u

Show HN: Pxxl App – A Nigerian Alternative to Vercel, Render, and Netlify Hi HN, I built Pxxl App — a free web hosting and deployment platform for developers in Nigeria and beyond. It’s a Nigerian alternative to Vercel, Render, and Netlify, designed for those who want a simple, fast, and barrier-free way to host both frontend and backend apps. With Pxxl App, you can connect your Git repo and deploy in seconds — no credit card, no limits. You’ll get a live subdomain like yourapp.pxxl.pro, automatic builds, and continuous deployment. It supports: • Frontend frameworks: React, Next.js, Vue, Svelte, and more • Backend projects: Node.js, PHP, and Python • Features like environment variables, CI/CD, and instant rollback The goal is to make cloud deployment accessible to African and global developers without the typical payment or region restrictions. It’s completely free to start, and I’d love to hear feedback from the HN community on how to improve it or what features you’d want next. Check it out: https://pxxl.app https://pxxl.app October 15, 2025 at 11:55PM

Tuesday, October 14, 2025

Show HN: An open source access logs analytics script to block bot attacks https://ift.tt/H2J7nkq

Show HN: An open source access logs analytics script to block bot attacks https://ift.tt/AXWGLMa October 15, 2025 at 12:45AM

Show HN: GoHPTS-TCP/UDP Transparent Proxy with ARP Spoofing and Traffic Sniffing https://ift.tt/ZmgtTae

Show HN: GoHPTS-TCP/UDP Transparent Proxy with ARP Spoofing and Traffic Sniffing https://ift.tt/9Yy1whv October 14, 2025 at 08:19PM

Show HN: I built a free AI tool that scans and sorts financial news for traders https://ift.tt/csomtKN

Show HN: I built a free AI tool that scans and sorts financial news for traders https://www.fxradar.live/ October 14, 2025 at 10:56PM

Show HN: Ark v0.6.0 – Go ECS with new declarative event system https://ift.tt/NbGYyHB

Show HN: Ark v0.6.0 – Go ECS with new declarative event system Ark is a high-performance Entity Component System (ECS) library for Go. Ark v0.6.0 introduces a new event system built around lightweight, composable observers. These allow applications to react to ECS lifecycle changes like entity creation/removal, component updates, and relation changes using declarative filters and callbacks. Observers follow the same patterns as Ark’s query system, making them easy to integrate and reason about. Custom events are also supported. They can be emitted manually and observed with the same filtering logic, making them ideal for modeling domain-specific interactions such as input handling and other reactive game logic. As a new performance-related feature, filters and queries are now concurrency-safe and can be executed in parallel. This release also includes a load of performance improvements, from faster archetype switching and optimized query and table creation to faster bitmask operations. The new World.Shrink method helps reclaim unused memory in dynamic workloads. Docs have been expanded with a full guide to the event system, examples for both built-in and custom events, and an Ebiten integration example. A cheat sheet for common operations has been added. Finally, Ark now has 100% test coverage. Changelog: https://ift.tt/cZQ2HiN Repo: https://ift.tt/fXTDbva Would love feedback from anyone building games, simulations, or ECS tooling in Go. https://ift.tt/fXTDbva October 14, 2025 at 12:34PM
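The observer pattern described in the release - callbacks registered with declarative filters that fire on lifecycle events - can be sketched conceptually. Python is used here just to show the shape of the idea; Ark itself is a Go library and its real filter/observer API differs:

```python
from collections import defaultdict

class EventBus:
    """Tiny observer registry in the spirit of Ark's declarative events
    (conceptual sketch only - not Ark's actual Go API)."""

    def __init__(self):
        self._observers = defaultdict(list)

    def observe(self, event, callback, component=None):
        """Register a callback, optionally filtered by component name."""
        self._observers[event].append((component, callback))

    def emit(self, event, entity, component=None):
        """Invoke every observer of `event` whose filter matches."""
        for wanted, callback in self._observers[event]:
            if wanted is None or wanted == component:
                callback(entity, component)

bus = EventBus()
log = []
bus.observe("created", lambda e, c: log.append(("any", e)))
bus.observe("component_added", lambda e, c: log.append((c, e)),
            component="Position")

bus.emit("created", entity=1)
bus.emit("component_added", entity=1, component="Position")
bus.emit("component_added", entity=1, component="Velocity")  # filtered out
print(log)  # [('any', 1), ('Position', 1)]
```

In a real ECS the filters match archetypes/component masks rather than strings, which is what lets them run with query-like performance.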

Monday, October 13, 2025

Show HN: Wordle but you have to predict your score before playing https://ift.tt/PFklp8W

Show HN: Wordle but you have to predict your score before playing If you want to compare your results, feel free to join the HN team. However, you can also play without signing up for an account. It's a SvelteKit application running on Cloudflare Workers. Would love some feedback on the idea and execution! https://ift.tt/N8ngJbQ October 14, 2025 at 08:56AM

Show HN: AI Toy I worked on is in stores https://ift.tt/6qIN0fl

Show HN: AI Toy I worked on is in stores Alt link: https://ift.tt/1DGEMdZ Video demo: https://www.youtube.com/watch?v=0z7QJxZWFQg The first time I talked with AI Santa and it responded with a joke, I was HOOKED. The fun/nonsense doesn't click until you try it yourself. What's even more exciting is you can build it yourself: libpeer: https://ift.tt/3uGq2C7 pion: https://ift.tt/QYRbdTs Then go do all your fun logic in your Pion server. Connect to any Voice AI provider, or roll your own via Open Source. Anything is possible. If you have questions or hit any roadblocks I would love to help you. I have lots of hardware snippets on my GitHub: https://ift.tt/oB7klsD . https://ift.tt/efQuanW October 12, 2025 at 07:45PM

Sunday, October 12, 2025

Show HN: Promptlet – Mac app to help you stop typing "ultrathink" over and over https://ift.tt/KSDsqyv

Show HN: Promptlet – Mac app to help you stop typing "ultrathink" over and over https://ift.tt/iRynIck October 13, 2025 at 02:29AM

Show HN: Pyreqwest – Powerful and fast Rust-reqwest based HTTP client for Python https://ift.tt/V6YROPN

Show HN: Pyreqwest – Powerful and fast Rust-reqwest based HTTP client for Python Python has lacked a batteries-included HTTP library that offers both async and sync clients. Httpx (httpcore), which has offered this, is unfortunately pretty much unmaintained and suffers from huge perf issues ( https://ift.tt/Usq4Kjy ). I built pyreqwest, an HTTP client for Python that is fully Rust-based on top of reqwest. It includes all the features reqwest offers, plus some more, including unit testing utilities (mocking, ASGI app support). Go check https://ift.tt/tD3wxv9 :) https://ift.tt/tD3wxv9 October 13, 2025 at 01:17AM

Show HN: I built a simple ambient sound app with no ads or subscriptions https://ift.tt/QbWEPlJ

Show HN: I built a simple ambient sound app with no ads or subscriptions I’ve always liked having background noise while working or falling asleep, but I got frustrated that most “white noise” or ambient sound apps are either paywalled, stuffed with ads, or try to upsell subscriptions for basic features. So I made Ambi, a small iOS app with a clean interface and a set of freely available ambient sounds — rain, waves, wind, birds, that sort of thing. You can mix them, adjust volume levels, and just let it play all night or while you work. Everything works offline and there are no hidden catches. It’s something I built for myself first, but I figured others might find it useful too. Feedback, bugs, and suggestions are all welcome. https://ift.tt/QZkAjUs... https://ambisounds.app/ October 12, 2025 at 08:19PM

Saturday, October 11, 2025

Show HN: Solving the cluster 1 problem with vCluster standalone https://ift.tt/HrxtRDk

Show HN: Solving the cluster 1 problem with vCluster standalone vcluster is an open source tool for Kubernetes multi-tenancy, and over the years it has matured to offer hosted control-plane virtual clusters and shared virtual clusters, but the host cluster problem was always there. With vcluster standalone, you can now create the first cluster with the same developer experience and consolidate the multiple-vendor problem. With this, you can now use vcluster for the entire multi-tenancy spectrum. Feel free to discuss, happy to answer any questions. https://ift.tt/UQX0rGb October 8, 2025 at 10:20PM

Show HN: Sprite Garden - HTML Canvas 2D sandbox and farming https://ift.tt/P19qxuD

Show HN: Sprite Garden - HTML Canvas 2D sandbox and farming Sprite Garden: https://kherrick.github.io/sprite-garden/ A 2D sandbox exploration and farming game that runs entirely in the web browser. As a fully HTML, CSS, and JavaScript game, it is highly readable, hackable, and customizable. Included on "globalThis" is the "spriteGarden" global object with the game config and state readily available. Drawing with tiles is as easy as opening dev tools (use the menu in the browser as keyboard is captured), or entering the "Konami Code," for a full screen view and a map editor. - Share games from the world state manager - Explore unique procedurally generated biomes - Dig for resources like coal, iron, and gold - Use collected materials to place blocks and shape the world - Discover underground cave systems filled with resources - Plant and harvest different crops with "realistic" growth cycles Examples: - Preparing a QR Code to be mined: https://gist.github.com/kherrick/1191ae457e1f6e1a65031c38c2d... - Drawing a heart in the sky: https://gist.github.com/kherrick/3dc9af05bccc126b11cc26fb30a... - Entering the Konami Code (map editor / fullscreen): https://gist.github.com/kherrick/effbe1463d9b78da046f27c5d42... I'm unsure how the game should be taken further, or whether it should progress. Some potential ideas for the future include: - Input Box with JS Execution: Provide a safe, sandboxed input area in the game's UI where players can write small JS functions or scripts (instead of exposing it on globalThis). - API Exposure: Expose a controlled API or object representing game state and functions, like terrain manipulation, crop growth, player movement, to the user script so players can automate or modify behaviors. - Event Hooks: Allow players to register hooks into game events (e.g., world update, planting crops) where their custom code runs, enabling mods or custom automation. - Multiplayer: Use WebRTC to allow many players in the same world. 
- Actual gamification: reasons to play, a health meter, powerups, plant combinations, enemies? - Better mobile controls: currently on-screen only, with no swiping for movement. - Easier building with blocks: currently block position is based on the player's location. Also featured on: - Microsoft Store: https://ift.tt/ut48Uyk - Wayback Machine: https://ift.tt/SE5c28t.... Feedback is highly welcome, and source is available at: https://ift.tt/1DL7ikU https://kherrick.github.io/sprite-garden/ October 12, 2025 at 03:15AM

Friday, October 10, 2025

Show HN: Praxos – Webhooks for Your Life https://ift.tt/1Rcakjn

Show HN: Praxos – Webhooks for Your Life Hello HN, Lucas and Soheil here from Praxos ( https://mypraxos.com/ )! We’ve been working on an AI personal assistant for a while now, and today we're sharing about our new webhooks feature with you. You can now add webhooks and triggers by asking Praxos over text or voice to set them up for you. Webhooks listen for conditions that, when they happen, trigger another action. Webhooks can execute one time or be indefinite. They can execute any action currently supported by Praxos. They are implemented for email and calendar (Gmail, Outlook), Notion, Slack, Discord, Trello, Dropbox, Drive, iMessage, Whatsapp and Telegram. Examples include: –"When a new task is added to my Trello ‘Urgent’ list, create a two-hour block on my Google Calendar within my next work window, and send me a reminder as soon as it happens. " [this ties nicely to the next one] –"If my calendar says I’m in a focus block, auto-reply on Slack saying I’m working on the latest Trello Urgent task." –“When a meeting transcript is ready from Fireflies and lands on my email, extract decisions and next steps, then post a 5-bullet summary in #team-updates on Slack” [ties to next point]. –“When a meeting transcript from work finishes processing, summarize it, and post key decisions to Slack. But delay the notification until after my kid’s bedtime. If the summary includes tasks due tomorrow, block off my calendar in the morning and text me a reminder after I wake up.” –"My goal is to read 12 books this year. Every month, send me a list of 10 books and their links to Goodreads based on what I like. Every Saturday morning, ask me how far along I am with my reading. If you find I purchased a book (i.e.: from checking my email) then add it to my reading list, and ask me if I've started reading it." –"Every time I get a receipt from Uber Eats or Doordash, extract the date, bill, and meal and add those to my personal finances spreadsheet on Google Sheets." 
–“When a new transaction appears in my bank email or statement, extract the merchant, category, and amount, then log it in my ‘Spending Tracker’ Google Sheet. If total monthly spend crosses my budget limit, post a summary to my private WhatsApp chat with the top three categories that caused it.” –"Remind me to pay my credit card and bills each time a new email comes in. Ask me one day later whether I have done so, unless I tell you I already have." –“At month-end, compile all financial logs from Sheets, receipts, and transcripts of my finance calls, then generate a single ‘Monthly Snapshot’ PDF. If my savings rate improved, add a green badge, and congratulate me. If it worsened, send me a summary with trends.” –"Every Sunday, check my latest additions to Google Photos and send a curation to my mom and grandma over Whatsapp." –"Review user feedback on the Praxos Discord channel and add it to our User Feedback page on Notion. Keep a counter for repeat requests." –"If I receive an email from Lucas, and I'm coding, respond to him and tell him I'm busy. Also remind him to check what I'm up to on Trello." Curious? Try it out for free for 7 days at https://ift.tt/JxMi5aj ! October 11, 2025 at 12:00AM
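All of the examples above follow the same trigger → condition → action shape. A minimal sketch of that pattern, with all names illustrative (this is not Praxos's actual API):

```python
# Illustrative sketch of the webhook pattern: a condition that matches
# incoming events, an action to run on match, and one-shot vs. indefinite
# execution. Names here are made up, not Praxos's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Webhook:
    condition: Callable[[dict], bool]   # does this event match?
    action: Callable[[dict], str]       # what to do when it does
    one_time: bool = False              # execute once, or indefinitely
    fired: int = 0

def dispatch(event: dict, hooks: list[Webhook]) -> list[str]:
    """Run every hook whose condition matches, retiring one-shot hooks."""
    results = []
    for hook in hooks:
        if hook.one_time and hook.fired:
            continue                    # one-shot hook already used up
        if hook.condition(event):
            results.append(hook.action(event))
            hook.fired += 1
    return results

# e.g. "when a task lands in the Trello 'Urgent' list, block time on my calendar"
urgent = Webhook(
    condition=lambda e: e.get("list") == "Urgent",
    action=lambda e: f"calendar: block 2h for {e['task']}",
)
print(dispatch({"list": "Urgent", "task": "ship demo"}, [urgent]))
# ['calendar: block 2h for ship demo']
```

The interesting part in practice is the condition language (calendar state, email contents, budget totals), which the sketch reduces to a plain predicate.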

Show HN: Multiple choice video webgame experiment https://ift.tt/VezWH4T

Show HN: Multiple choice video webgame experiment Hey all, just wanted to share a little game experiment. It's a rooms & keys kind of adventure with a lot of random deaths. It plays a Veo3-generated video in response to clicks, with Gemini used for coding. Prompting the videos was fun, but trying to vibe code everything was not. In the future I'll go back to using LLMs more sparingly for isolated functions, or at least try not to have it create anything that requires seeing the output to debug. https://ift.tt/sVFEQHh October 10, 2025 at 10:39PM

Show HN: Iframetest.com https://ift.tt/P4iOsvX

Show HN: Iframetest.com https://iframetest.com/ October 6, 2025 at 03:25PM

Thursday, October 9, 2025

Show HN: Open-Source Voice AI Badge Powered by ESP32+WebRTC https://ift.tt/iL7u2CW

Show HN: Open-Source Voice AI Badge Powered by ESP32+WebRTC hi! video[0] The idea is you could carry around this hardware and ask it any questions about the conference. Who is speaking, what are they speaking about, etc. It connects via WebRTC to an LLM and you get a bunch of info. This is a workshop/demo project I did for a conference. When I was talking to the organizers I mentioned that I enjoy doing hardware + WebRTC projects. They thought that was cool and so we ran with it. I have been doing these ESP32 + voice AI projects for a while now. I started with an embedded SDK for LiveKit[1] in July 2024 and have been noodling with it since then. This code then found its way into pipecat/livekit etc. So I hope it inspires you to go build with hardware and WebRTC. It's a REALLY fun space right now. Lots of different cheap microcontrollers and even more cool projects. [0] https://www.youtube.com/watch?v=gPuNpaL9ig8 [1] https://ift.tt/VJARHqK https://ift.tt/pnzcEyu October 10, 2025 at 02:25AM

Show HN: Created macOS app to help you keep your distance from your screen https://ift.tt/RITlsb0

Show HN: Created macOS app to help you keep your distance from your screen Hey everyone, If you're anything like me, you spend a good chunk of your day (and night) on your computer. I often find that when I'm zoned in, my posture gets worse and worse and my face ends up really close to the screen. And over the course of a workday, when I finally unplug, my eyes have a hard time focusing on things that are far away. This has become a big enough problem for me that I decided to create an app to help me keep my face far enough from the screen. Now, I could've gone with a simple notification with a timer built into it but, as with all reminder notifications, they soon become noise for me and I end up just dismissing them. I needed something to actively force me to move back. Which is where FarSight comes in. It uses your camera to gauge your distance and blurs the entire screen if it detects that you stay too close for a certain period of time. I made it so that it won't be extremely annoying and disruptive (e.g. blurring the screen every time you cross the line) but just enough of a nuisance to be helpful. I've been using it every day since creating it and it's definitely helped me with eye strain, double vision, and surprisingly, my posture as well. I'm not sure if I'll keep it free forever but I wanted to release it first to ask for feedback. I only have the app on macOS, so if there is enough interest, I'll invest in making a Windows counterpart. https://ift.tt/45fepBr... Also, in case anyone is wondering, no data is collected and the snapshots during the app's usage are not saved but only used to calculate the distance. October 10, 2025 at 01:57AM
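The "blur only after sustained closeness" behaviour described above amounts to a small debounce state machine. A sketch, where the threshold, grace period, and class name are assumptions rather than FarSight's actual code:

```python
# Illustrative sketch: blur only when the user has stayed too close for a
# sustained period, not on every momentary lean-in. Thresholds and the
# class name are assumptions, not FarSight's real implementation.
class ProximityGuard:
    def __init__(self, min_cm: float = 45.0, grace_s: float = 5.0):
        self.min_cm = min_cm        # closer than this counts as "too close"
        self.grace_s = grace_s      # must stay too close this long before blurring
        self._too_close_since = None

    def update(self, distance_cm: float, now_s: float) -> bool:
        """Feed one camera distance reading; return True when the screen should blur."""
        if distance_cm >= self.min_cm:
            self._too_close_since = None   # backed off: reset the timer
            return False
        if self._too_close_since is None:
            self._too_close_since = now_s  # just crossed the line
        return now_s - self._too_close_since >= self.grace_s

guard = ProximityGuard(min_cm=45.0, grace_s=5.0)
print(guard.update(30.0, 0.0), guard.update(30.0, 5.0))  # False True
```

The grace period is what keeps the nudge from becoming the kind of noise the post complains about in timer notifications.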

Wednesday, October 8, 2025

Show HN: Spica – OSS Tool to Generate Infinite Length Sora-2 Videos https://ift.tt/khVWpoz

Show HN: Spica – OSS Tool to Generate Infinite Length Sora-2 Videos https://ift.tt/O7pmuyS October 9, 2025 at 12:04AM

Show HN: KI Song Erstellen Kostenlos – AI Music Generator für Deutsche Musik https://ift.tt/BVtwqv4

Show HN: KI Song Erstellen Kostenlos – AI Music Generator für Deutsche Musik Free AI music generator for German songs. Text in → professional song in a few minutes. Built for content creators who need copyright-free music. https://ift.tt/UT7QzBO GitHub: https://ift.tt/bgTrk19 Try it out! https://ift.tt/UT7QzBO October 8, 2025 at 10:26PM

Tuesday, October 7, 2025

Show HN: Agentic Design Patterns – Python Edition, from the Codex Codebase https://ift.tt/hIUK7va

Show HN: Agentic Design Patterns – Python Edition, from the Codex Codebase While reading Agentic Design Patterns by Antonio Gulli, I wanted to see how these patterns look in real code. I cloned the OpenAI Codex repo (the open-source AI coding assistant that recently trended on HN), but it was in Rust. So I used Cursor to help me extract and translate 18+ agentic patterns from Codex’s codebase into Python. That small experiment turned into a full open-source guide: GitHub: Codex Agentic Patterns https://ift.tt/9upZHC7 Each pattern comes with: a short explanation and code sample, a runnable exercise and agent snippet, a summary of how Codex used the pattern (e.g., prompt chaining, tool orchestration, reflection loops, sandbox escalation), and one full working Python agent that ties it all together. If you’ve read the agentic design patterns book or explored Codex, this is a bridge between theory and practice, focused on runnable, open examples instead of abstract diagrams. It’s completely free and open-source. Would love feedback, ideas, or even new patterns from your own agent experiments. https://artvandelay.github.io/codex-agentic-patterns/ October 8, 2025 at 04:11AM
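To give a flavour of one pattern the guide covers, prompt chaining feeds each step's output into the next prompt. A minimal sketch, with the model call stubbed out so it runs without an API key (this is an illustration, not code from the guide):

```python
# Prompt chaining in miniature: each prompt receives the previous answer
# as context. `llm` is any callable from prompt string to answer string;
# the stub below stands in for a real model call.
def chain(llm, prompts, seed=""):
    """Run the prompts in order, threading each answer into the next prompt."""
    answer = seed
    for prompt in prompts:
        answer = llm(f"{prompt}\n\nContext: {answer}")
    return answer

# Stubbed model so the sketch is self-contained and deterministic.
fake_llm = lambda p: f"[answer to: {p.splitlines()[0]}]"
print(chain(fake_llm, ["Summarize the diff", "Draft a commit message"]))
# [answer to: Draft a commit message]
```

In a real agent the same loop appears with tool calls and reflection steps interleaved between prompts.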

Show HN: DidMySettingsChange – A tool that checks changed Windows settings https://ift.tt/t6SViur

Show HN: DidMySettingsChange – A tool that checks changed Windows settings Microsoft has been under heavy scrutiny for how it manages Windows over the years, particularly concerning privacy and telemetry settings. Many users find that after disabling certain settings, those settings are mysteriously re-enabled after updates or without any apparent reason. DidMySettingsChange is a Python script designed to help users keep track of their Windows privacy and telemetry settings, ensuring that they stay in control of their privacy without the hassle of manually checking each setting. Features: • Comprehensive checks: automatically scans all known Windows privacy and telemetry settings. • Change detection: alerts users if any settings have been changed from their preferred state. • Customizable configuration: allows users to specify which settings to monitor. • Easy to use: simple command-line interface with clear and concise output. • Logs and reports: generates detailed logs and reports for auditing and troubleshooting. https://ift.tt/3bKGAHD October 6, 2025 at 04:19AM
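The core check the post describes is a diff between a preferred state and the current one. A sketch with registry reads mocked as a plain dict (the setting keys and function names here are illustrative, not the script's actual code, which would read the Windows registry):

```python
# Illustrative sketch of drift detection: compare current setting values
# against a user-declared preferred state and report anything that moved.
# Keys and values are made up; the real script queries the Windows registry.
PREFERRED = {
    "Telemetry/AllowTelemetry": 0,
    "Privacy/AdvertisingId": 0,
}

def detect_changes(current: dict, preferred: dict = PREFERRED) -> dict:
    """Return {setting: (preferred, actual)} for every drifted setting."""
    return {
        key: (want, current.get(key))
        for key, want in preferred.items()
        if current.get(key) != want
    }

# e.g. an update silently re-enabled telemetry:
print(detect_changes({"Telemetry/AllowTelemetry": 1, "Privacy/AdvertisingId": 0}))
# {'Telemetry/AllowTelemetry': (0, 1)}
```

Persisting the previous snapshot alongside the preferred state is what lets the tool distinguish "you never set this" from "an update flipped it back".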

Show HN: I'm building a browser for reverse engineers https://ift.tt/tyQYdjA

Show HN: I'm building a browser for reverse engineers https://ift.tt/b2yRU5C October 6, 2025 at 09:02PM

Show HN: Gotask, a simple task manager CLI built using Golang https://ift.tt/ZA9jVzt

Show HN: Gotask, a simple task manager CLI built using Golang Hey folks, Gotask is a simple Go CLI I made to explore some aspects of the Go programming language. https://ift.tt/lgN6uwI October 8, 2025 at 12:20AM

Monday, October 6, 2025

Show HN: TinqerJS – LINQ-inspired QueryBuilder for TypeScript + Postgres/SQLite https://ift.tt/7i8JCWk

Show HN: TinqerJS– LINQ-inspired QueryBuilder for TypeScript + Postgres/SQLite https://tinqerjs.org October 6, 2025 at 08:58PM

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/hucG4KP

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/0beoYHD October 6, 2025 at 04:28PM

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/hAY9QL2

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/bRZSUvg October 6, 2025 at 11:52PM

Show HN: Volant – spin up real microVMs in 10 seconds (Docker images or initramfs) https://ift.tt/EgG5ATX

Show HN: Volant – spin up real microVMs in 10 seconds (Docker images or initramfs) I’ve been building Volant, a modular microVM orchestration engine that makes running microVMs feel as simple as Docker. It supports cloud-init, GPU/VFIO passthrough (yes, you can run AI/ML workloads in isolated microVMs), booting Docker images via a plugin system, and Kubernetes-style deployments with replication, all from a single CLI (with a web UI coming, see below). Coming soon: a built-in PaaS mode with snapshot-based cold start elimination, sort of like Dokploy, but designed for serverless workloads that boot from memory snapshots instead of containers. Volant is intentionally a bit opinionated to make microVMs more accessible, but it’s fully extensible for power users. Check out the README and the docs for more details. It’s free and source-available (under BSL); would love to hear feedback or thoughts from anyone! tl;dr: the 6-second GIF in the README shows the full flow: install → create VM → get HTTP 200. https://ift.tt/p3AQNum October 6, 2025 at 04:24AM

Sunday, October 5, 2025

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/OfDnGeR

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/oNZKUrc October 6, 2025 at 09:28AM

Show HN: High-fidelity, compact, and real time rendering of university campus https://ift.tt/ExjboKt

Show HN: High-fidelity, compact, and real time rendering of university campus Technical thread: https://ift.tt/X8UBZ4n https://hoanh.space/aalto/ October 6, 2025 at 05:21AM

Saturday, October 4, 2025

Show HN: An open-source, RL-native observability framework we've been missing https://ift.tt/ietSHwr

Show HN: An open-source, RL-native observability framework we've been missing The RL ecosystem is maturing: verifiers are standardizing how we build and share environments. However, as it grows, we need observability tooling that actually understands RL primitives. Running RL experiments without visibility into rollout quality, reward distributions, or failure modes is a waste of time. Monitor provides live tracking, per-example inspection, and programmatic access: see what's happening during runs and debug what went wrong afterward. https://ift.tt/0Lz1VIO October 5, 2025 at 03:05AM
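The kind of per-rollout visibility argued for above can be sketched in a few lines: collect rewards per rollout, then summarize the return distribution so outliers are easy to spot. All names here are illustrative, not Monitor's actual API:

```python
# Illustrative sketch of rollout-level observability: per-episode returns
# plus distribution statistics across a batch of rollouts. Field names are
# made up for the example, not Monitor's real interface.
from statistics import mean, pstdev

def summarize_rollouts(rollouts: list[list[float]]) -> dict:
    """Summarize the return distribution over a batch of reward sequences."""
    returns = [sum(r) for r in rollouts]
    return {
        "episodes": len(returns),
        "mean_return": mean(returns),
        "stddev": pstdev(returns),
        "worst": min(returns),   # the failure modes worth inspecting first
    }

print(summarize_rollouts([[1.0, 0.5], [0.0, 0.0], [2.0, 1.0]]))
```

An RL-native tool would go further (per-example inspection, reward breakdown by step), but even this level of aggregation beats eyeballing a single scalar loss curve.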

Show HN: World Amazing Framework: Like Django for Civilization https://ift.tt/cBZgEuj

Show HN: World Amazing Framework: Like Django for Civilization Any initial thoughts? This framework is meant to be a tool for construction, so if you want to play around with it for creating potential specific implementations, you can drop the contents of the website, the GitHub README, and the entire overview.md into an AI chat, and that should be enough to use the framework, at least conceptually. Would y'all want me to pre-prime a chat in Google AI Studio with the full context of the plan and some basic direction for discourse? I can share a link to a ready-to-go environment. The core documentation should answer most mechanical questions. And if you feed the docs into an AI chat, you can ask it any question you may have, or to simply ask it to explain something in different ways, or hypothesize solutions to any world issue, either systemic or regional. Gemini Pro 2.5 can take the full doc in one prompt, and its ability to co-create ideas is remarkable. I've been using it mostly through the AI Studio interface. Much of the overview is as much my work as it is a synthesis of my collaboration with Gemini Pro 2.5, ChatGPT-4o, and some early contributions from GPT-4 about a year ago. Before LLMs, I was building out pamphlet-style pages on a website (that are up at whomanatee.org, which is the base wrapper implementation of the framework), and I was planning to use them as talking points. I was anticipating that much of the deep thinking would have to happen in slow, public discourse. With LLMs, I've been able to stress-test these ideas from every possible angle, using any past event or theory to see if the framework could withstand scrutiny. At one point, a model argued that Adam Smith would have rejected this idea as fantasy. So I worked with it to develop an economic plan that "synthetic Adam" praised. It's incredible that we now have the ability to get synthesized thoughts from almost any perspective. You could ask it, "What would Barack Obama think of this plan? 
And using the framework, what would be your response to any hesitations he may have?" And it responds with incredible analysis, synthesis, and feedback. https://ift.tt/8QzGKWh October 5, 2025 at 03:44AM

Show HN: Run – a CLI universal code runner I built while learning Rust https://ift.tt/sM0NdnZ

Show HN: Run – a CLI universal code runner I built while learning Rust Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively. I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit. Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/TDCoZ2l I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), packaging and cross-platform distribution. Thanks — I’ll try to answer questions and share design notes. https://ift.tt/TDCoZ2l October 5, 2025 at 12:04AM

Friday, October 3, 2025

Show HN: Beacon (open source) – Built after AWS billed me 700% more for RDS https://ift.tt/sRJM4H0

Show HN: Beacon (open source) – Built after AWS billed me 700% more for RDS I've been hosting my side project on AWS. I was paying an okay price for not managing infrastructure at all. I moved everything to AWS Lightsail after my startup credits ran out. The project was initially a success and made several thousand euros per month in revenue. Then came COVID with new regulations, and suddenly my customers were nonexistent (the problem it solved was no longer there). After that it was not making me money; I was paying for it from my own pocket, thinking maybe it would come back. Then one day, after some ignored AWS emails I'd dismissed as spam, I got a huge charge on my card, along with a bill from AWS. The charge was orders of magnitude higher than the previous charges. "WTF??" I said to myself while rushing to log into the dashboard to see what the issue was. No DDoS, no misconfiguration, nothing unusual. I logged into the root account to look at the billing page, and there it was: an RDS PostgreSQL legacy fee of ~€200 because I did not upgrade to Postgres 16 (from 13). I was baffled. I paid €25 monthly (27% tax included) for the smallest RDS instance, then I see this monster fee for something I think should cost maybe €2. I mean, AWS just has to run it in a different environment. For €200 I could buy them a new server to run it for me. That's when I had the realization: "I have a spare Raspberry Pi 3, I'll just host everything on that. That will be free." But self-hosting came with its own challenges, especially on a resource-constrained device. I needed better tools to deploy and monitor my application. SSH-ing into the Raspberry Pi every time I wanted to deploy a newer version was a pain in the ass. So was debugging issues. Existing deployment and monitoring solutions were either too expensive, too complex, or didn't work well with resource-constrained devices like the Raspberry Pi. Examples: * Grafana/Prometheus for monitoring: Over-engineered for my needs.
* OpenSearch/ELK for logs: A nightmare on low-resource devices. * Metabase for dashboards: A RAM-hungry monster that eats up more resources than if I hosted 100 applications. And for remote DB access, opening a port and putting it behind Cloudflare Zero Trust is much easier than setting up Metabase. So I decided to build my own deployment and monitoring agent, and why not make it open source? The agent can currently deploy applications from GitHub by polling release tags, monitor device metrics, alert when thresholds are reached, and forward logs to a cloud dashboard. It's still in development, with features improving every week. If you are interested, give it a star on GitHub. https://beaconinfra.dev October 4, 2025 at 01:52AM
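Deploying by polling release tags, as described above, boils down to remembering the last deployed tag and acting only on a new one. A sketch where `fetch_latest_tag` and `deploy` are stand-ins for the agent's real steps, not Beacon's actual code:

```python
# Illustrative sketch of deploy-by-polling: remember the last deployed
# release tag and trigger a deploy only when a newer tag appears.
# `fetch_latest_tag` and `deploy` are hypothetical stand-ins.
def poll_once(fetch_latest_tag, deploy, state: dict) -> bool:
    """Return True if a new release was deployed on this poll."""
    latest = fetch_latest_tag()
    if latest and latest != state.get("deployed"):
        deploy(latest)
        state["deployed"] = latest   # persist this in the real agent
        return True
    return False

state = {}
deployed = []
poll_once(lambda: "v1.2.0", deployed.append, state)
poll_once(lambda: "v1.2.0", deployed.append, state)  # no-op: already deployed
print(deployed)  # ['v1.2.0']
```

Pull-based polling like this suits a Raspberry Pi behind NAT: no inbound port needs to be opened for deploys.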

Show HN: Was pissed about Google Docs, so I made a text editor myself https://ift.tt/VZdU4DR

Show HN: Was pissed about Google Docs, so I made a text editor myself It’s been a while since I started writing a book. The process hasn't been easy, first because I’m not a writer. I’ve created well-thought-out internet posts here and there, which ended up becoming my first book. That was a good experience, but then I started to think that a book that just gathered my thoughts online isn't entirely “writing” a book; I needed more. So I opened Google Docs and started typing. Then I started to figure out what I wanted to write: should it be a fantasy story, a self-biography, or an observation of the world? I believe most writers have this figured out beforehand, but not me. I began writing pieces to see if they would fit together and make sense. I started gathering philosophical anecdotes based on my core beliefs and sensed something brewing. When I finally decided what the book would be about, and what I wanted to write, the type of writing I wanted to do, I saw an already sizable document with ideas scattered throughout it. That was good for me, as I could just join the pieces, but I didn’t want to be trapped in writing that could be repetitive. I wanted to have the ideas, the philosophy, the whole reason why the book is like this, stored in a place I could easily access. I planned to use AI as a memory dump, where I could add information during a conversation. Then, whenever I consulted it, I could check if I'd already written something and whether it reflected the temper and pace I want for my book. Everything seemed fine, but we encountered a few problems. First, the AI's writing was a conundrum of errors. I could gain assistance and a sense of what to write, but the AI itself, due to our prolonged interchange, started to hallucinate and produce nonsense or "forget" our conversation. The second issue was that the AI couldn't consistently verify what was already written.
As the text grew larger, the context window began to shrink, and the more I used the AI tool, the less helpful it became. So I decided to search for a tool that could do what I wanted. I found elements in each of the products I've used: some were extremely satisfying to write with, others had good features to enhance text, some allowed me to organize my book by scattering ideas effectively, and still others used AI for correction and proofreading tasks. The solutions for this market are diverse and offer numerous approaches. I could easily transition between tools, but I wanted something unified to keep my writing process in one place. That’s why I created this text editor and called it SourcePilot. It’s a tool that identifies your writing style as you write, allowing you to add notes, sources, and videos, and to use them as context for the AI, enabling more nuanced outputs tailored to your writing. It was interesting to build, and I’m providing a link you can try. It’s a desktop app, and you can use it for free, depending on the hardware you have. I’m looking for people who could give me feedback on what's wrong with it. People who could not install it (I’ve built it on Mac and could not test Linux and Windows), or have problems logging in. I keep getting loads of problems because I’m using the tool right now as I write this text. I'm planning to launch a new version soon, featuring an anti-slop algorithm I’ve developed, along with document branching. I just want to see if there are people interested in using it at the moment. If there aren't users, that's fine. I think I’ve made something for myself anyway. :) Thank you for your attention if you made it this far. You’re greatly appreciated. Cheers! https://sourcepilot.co/ October 4, 2025 at 01:28AM

Show HN: FLE v0.3 – Claude Code Plays Factorio https://ift.tt/rRZOtnd

Show HN: FLE v0.3 – Claude Code Plays Factorio We're excited to release v0.3.0 of the Factorio Learning Environment (FLE), an open-source environment for evaluating AI agents on long-horizon planning, spatial reasoning, and automation tasks. == What is FLE? == FLE uses the game Factorio to test whether AI can handle complex, open-ended engineering challenges. Agents write Python code to build automated factories, progressing from simple resource extraction (~30 units/min) to sophisticated production chains (millions of units/sec). == What's new in 0.3.0 == - Headless scaling: No longer needs the game client, enabling massive parallelization! - OpenAI Gym compatibility: Standard interface for RL research - Claude Code integration: We're livestreaming Claude playing Factorio on Twitch ( https://ift.tt/VJ1XEDr ) - Better tooling and SDK: 1-line CLI commands to run evaluations (with W&B logging) == Key findings == We evaluated frontier models (Claude Opus 4.1, GPT-5, Gemini 2.5 Pro, Grok 4) on 24 production automation tasks of increasing complexity. Even the best models struggle: - Most models still rely on semi-manual strategies rather than true automation - Agents rarely define helper functions or abstractions, limiting their ability to scale - Error recovery remains difficult – agents often get stuck in repetitive failure loops The performance gap between models on FLE correlates more closely with real-world task benchmarks (like GDPVal) than with traditional coding/reasoning evals. == Why this matters == Unlike benchmarks based on exams that saturate quickly, Factorio's exponential complexity scaling means there's effectively no performance ceiling. The skills needed - system debugging, constraint satisfaction, logistics optimization - transfer directly to real challenges.
== Try it yourself == >>> uv add factorio-learning-environment >>> uv add "factorio-learning-environment[eval]" >>> fle cluster start >>> fle eval --config configs/gym_run_config.json We're looking for researchers, engineers, and modders interested in pushing the boundaries of agent capabilities. Join our Discord if you want to contribute. We look forward to meeting you and seeing what you can build! -- FLE Team https://jackhopkins.github.io/factorio-learning-environment/versions/0.3.0.html October 4, 2025 at 01:02AM
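For anyone wondering what the Gym compatibility implies for experiment code, it is the usual reset/step loop. The sketch below runs against a stub environment; the observations, rewards, and class name are made up for illustration and are not FLE's real interface:

```python
# Illustrative Gym-style reset/step loop against a stub environment.
# The stub's observations and rewards are invented; the real env comes
# from the factorio-learning-environment package installed above.
class StubFactorioEnv:
    """Tiny stand-in with the standard Gym step signature."""
    def reset(self):
        self.t = 0
        return {"iron_plates_per_min": 0}

    def step(self, action):
        self.t += 1
        obs = {"iron_plates_per_min": 30 * self.t}   # made-up throughput ramp
        reward = float(obs["iron_plates_per_min"])
        done = self.t >= 3
        return obs, reward, done, {}

env = StubFactorioEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done, info = env.step("place_mining_drill()")
    total += reward
print(total)  # 180.0
```

In FLE the action is Python code the agent writes, which is what makes the long-horizon planning and error-recovery findings above measurable.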

Thursday, October 2, 2025

Show HN: BetterBrain – Dementia prevention, covered by insurance https://ift.tt/UpQ6Pam

Show HN: BetterBrain – Dementia prevention, covered by insurance Hey all! I’ve been building BetterBrain for the past few months, which is the first dementia prevention program entirely covered by insurance. BetterBrain combines expert clinicians, comprehensive testing and state of the art AI - and for many insurance plans is $0. Research shows that dementia can be detected up to 20 years in advance. Despite this, many people at risk of dementia overlook regular brain health assessments. Many members of our founding team have family members affected by neurodegenerative disease. We’re also hiring aggressively if anyone is interested in changing the future of treating neurodegenerative disease. Would love to talk to anyone interested https://ift.tt/9KpFf6u https://ift.tt/9KpFf6u October 3, 2025 at 07:33AM

Show HN: Uber for Flights https://ift.tt/SYxwLE3

Show HN: Uber for Flights My friend and I built BookMyFlight to finally modernize flight search + booking. Why we built this: - Personalization. I fly the same route every month, and there’s no platform that knows my preferences so that I can open it, find and book my flight, and close it within a minute. - Booking is slow. I hate seeing a long clunky airline form each time I need to book. I want booking a flight to feel more like booking an Uber. How it works: 1. Optionally make an account and save your traveler preferences. Personally, I've specified my routine route as SFO to CLE and that I only want red-eye direct flights for this route. 2. Search for flights using chat or the search panel. Chat feels especially time-saving when you have preferences saved (e.g. I just say “search my routine trip"). 3. Once you find the flight you want, use the one-click book feature which books your flight directly with the airline. The first time you book a flight, you’ll have to fill out your traveler info, but you won't see that form after that. Notes: - Your booking is directly with the airline (this means when something goes wrong, you get direct support from the airline—not a third-party) - You can add your rewards numbers for each airline to keep earning points/status The ultimate goal is to create the best possible experience that every traveler wants, but that OTAs and airlines don’t care to create. Also very receptive to hearing pain points from frequent flyers; we think this space is really outdated and could use some innovation. Try it out and let us know what you think :) https://bookmyflight.ai October 3, 2025 at 01:29AM

Show HN: Enhance – A Terminal UI for GitHub Actions https://ift.tt/LvV3546

Show HN: Enhance – A Terminal UI for GitHub Actions I'm very excited to share what I've been working on lately! Introducing ENHANCE, a terminal UI for GitHub Actions that lets you easily see and interact with your PRs' checks. It's available under a sponsorware model. Get more info on the site: -> https://ift.tt/P0NGvyK This is an attempt to make my OSS development sustainable. Happy to hear feedback about the model as well as the tool! Cheers! https://ift.tt/IABamDu October 3, 2025 at 12:49AM

Show HN: Photo AI Editor – Edit, Transform and Enhance Photos with Text Prompt https://ift.tt/hMN64IH

Show HN: Photo AI Editor – Edit, Transform and Enhance Photos with Text Prompt https://ift.tt/6U5kwBW October 2, 2025 at 12:19PM

Wednesday, October 1, 2025

Show HN: Rostra is a P2P (f2f) social network https://ift.tt/URt16um

Show HN: Rostra is a P2P (f2f) social network A public instance is available at https://rostra.me/ . It will default to showing the interface from the perspective of my own identity, in a read-only mode. Click "Logout" and then "Random" to generate your own identity to play with. https://app.radicle.xyz/nodes/radicle.dpc.pw/rad%3AzzK566qFsZnXomX2juRjxj9K1LuF October 2, 2025 at 03:40AM

Show HN: Open-source project – HTTP cache and reverse proxy https://ift.tt/mjT5eEL

Show HN: Open-source project – HTTP cache and reverse proxy https://borislavv.github.io/advcache.dev/ October 1, 2025 at 01:11PM

Show HN: Ocrisp, One-Click RAG Implementation, Simple and Portable https://ift.tt/iehtcM0

Show HN: Ocrisp, One-Click RAG Implementation, Simple and Portable https://ift.tt/sjAngzL October 1, 2025 at 08:23PM

Show HN: PHP-fts – Full-text search engine in pure PHP, no extensions https://ift.tt/wgSBiJP

Show HN: PHP-fts – Full-text search engine in pure PHP, no extensions https://ift.tt/WpBoNzV May 7, 2026 at 01:58AM