Friday, October 10, 2025

Show HN: Praxos – Webhooks for Your Life https://ift.tt/1Rcakjn

Show HN: Praxos – Webhooks for Your Life Hello HN, Lucas and Soheil here from Praxos ( https://mypraxos.com/ )! We’ve been working on an AI personal assistant for a while now, and today we’re sharing our new webhooks feature with you. You can now add webhooks and triggers by asking Praxos, over text or voice, to set them up for you. Webhooks listen for conditions and, when those conditions occur, trigger another action. A webhook can execute once or run indefinitely, and it can execute any action Praxos currently supports. Webhooks are implemented for email and calendar (Gmail, Outlook), Notion, Slack, Discord, Trello, Dropbox, Drive, iMessage, WhatsApp, and Telegram. Examples include:
–"When a new task is added to my Trello ‘Urgent’ list, create a two-hour block on my Google Calendar within my next work window, and send me a reminder as soon as it happens." [this ties nicely to the next one]
–"If my calendar says I’m in a focus block, auto-reply on Slack saying I’m working on the latest Trello Urgent task."
–“When a meeting transcript is ready from Fireflies and lands in my email, extract decisions and next steps, then post a 5-bullet summary in #team-updates on Slack.” [ties to the next point]
–“When a meeting transcript from work finishes processing, summarize it, and post key decisions to Slack. But delay the notification until after my kid’s bedtime. If the summary includes tasks due tomorrow, block off my calendar in the morning and text me a reminder after I wake up.”
–"My goal is to read 12 books this year. Every month, send me a list of 10 books and their Goodreads links based on what I like. Every Saturday morning, ask me how far along I am with my reading. If you find I purchased a book (e.g. by checking my email), add it to my reading list and ask me if I've started reading it."
–"Every time I get a receipt from Uber Eats or DoorDash, extract the date, bill, and meal, and add those to my personal finances spreadsheet on Google Sheets."
–“When a new transaction appears in my bank email or statement, extract the merchant, category, and amount, then log it in my ‘Spending Tracker’ Google Sheet. If total monthly spend crosses my budget limit, post a summary to my private WhatsApp chat with the top three categories that caused it.”
–"Remind me to pay my credit card and bills each time a new email comes in. If I don't tell you I've already done so, ask me one day later whether I have."
–“At month-end, compile all financial logs from Sheets, receipts, and transcripts of my finance calls, then generate a single ‘Monthly Snapshot’ PDF. If my savings rate improved, add a green badge and congratulate me. If it worsened, send me a summary with trends.”
–"Every Sunday, check my latest additions to Google Photos and send a curated selection to my mom and grandma over WhatsApp."
–"Review user feedback on the Praxos Discord channel and add it to our User Feedback page on Notion. Keep a counter for repeat requests."
–"If I receive an email from Lucas and I'm coding, respond to him and tell him I'm busy. Also remind him to check what I'm up to on Trello."
Curious? Try it out for free for 7 days at https://ift.tt/JxMi5aj ! October 11, 2025 at 12:00AM
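The condition-then-action rules above can be sketched as data plus a dispatcher. This is a minimal, hypothetical sketch of the idea, not Praxos's actual internals: the `Rule` structure and `dispatch` function are invented for illustration.

```python
# Hypothetical sketch of a trigger -> action rule engine; all names are
# illustrative, not Praxos's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over an incoming event
    action: Callable[[dict], str]       # what to run when the condition matches
    one_shot: bool = False              # "execute one time or be indefinite"
    done: bool = False

def dispatch(rules: list[Rule], event: dict) -> list[str]:
    """Run every still-active rule whose condition matches the event."""
    results = []
    for rule in rules:
        if rule.done or not rule.condition(event):
            continue
        results.append(rule.action(event))
        if rule.one_shot:
            rule.done = True   # a one-time webhook retires after firing
    return results

# Example: a rule shaped like the Trello -> Calendar one from the post.
rules = [Rule(
    name="urgent-task-block",
    condition=lambda e: e.get("source") == "trello" and e.get("list") == "Urgent",
    action=lambda e: f"calendar: 2h block for {e['card']}",
)]
print(dispatch(rules, {"source": "trello", "list": "Urgent", "card": "Ship v2"}))
# -> ['calendar: 2h block for Ship v2']
```

An indefinite rule simply leaves `one_shot` false and keeps matching future events.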

Show HN: Multiple choice video webgame experiment https://ift.tt/VezWH4T

Show HN: Multiple choice video webgame experiment Hey all, just wanted to share a little game experiment. It's a rooms-and-keys kind of adventure with a lot of random deaths. It plays a Veo3-generated video in response to clicks, with Gemini used for coding. Prompting the videos was fun, but trying to vibe-code everything was not. In the future I'll go back to using LLMs more sparingly for isolated functions, or at least try not to have them create anything that requires seeing the output to debug. https://ift.tt/sVFEQHh October 10, 2025 at 10:39PM

Show HN: Iframetest.com https://ift.tt/P4iOsvX

Show HN: Iframetest.com https://iframetest.com/ October 6, 2025 at 03:25PM

Thursday, October 9, 2025

Show HN: Open-Source Voice AI Badge Powered by ESP32+WebRTC https://ift.tt/iL7u2CW

Show HN: Open-Source Voice AI Badge Powered by ESP32+WebRTC Hi! Video: [0] The idea is that you could carry this hardware around and ask it any questions about the conference: who is speaking, what they're speaking about, etc. It connects via WebRTC to an LLM and you get a bunch of info. This is a workshop/demo project I did for a conference. When I was talking to the organizers, I mentioned that I enjoy doing hardware + WebRTC projects. They thought that was cool, and so we ran with it. I have been doing these ESP32 + voice AI projects for a bit now. I started with an embedded SDK for LiveKit[1] in July 2024 and have been noodling with it since then. That code then found its way into Pipecat, LiveKit, etc. So I hope it inspires you to go build with hardware and WebRTC. It's a REALLY fun space right now: lots of different cheap microcontrollers and even more cool projects. [0] https://www.youtube.com/watch?v=gPuNpaL9ig8 [1] https://ift.tt/VJARHqK https://ift.tt/pnzcEyu October 10, 2025 at 02:25AM

Show HN: Created macOS app to help you keep your distance from your screen https://ift.tt/RITlsb0

Show HN: Created macOS app to help you keep your distance from your screen Hey everyone, If you're anything like me, you spend a good chunk of your day (and night) on your computer. I often find that when I'm zoned in, my posture gets worse and worse and my face ends up really close to the screen. And over the course of a workday, when I finally unplug, my eyes have a hard time focusing on things that are far away. This became a big enough problem for me that I decided to create an app to help me keep my face far enough from the screen. Now, I could've gone with a simple notification on a timer, but, as with all reminder notifications, they soon become noise for me and I end up just dismissing them. I needed something to actively force me to move back, which is where FarSight comes in. It uses your camera to gauge your distance and blurs the entire screen if it detects that you've been too close for a certain period of time. I made it so that it won't be extremely annoying and disruptive (e.g. blurring the screen every time you cross the line) but just enough of a nuisance to be helpful. I've been using it every day since creating it, and it's definitely helped me with eye strain, double vision, and, surprisingly, my posture as well. I'm not sure if I'll keep it free forever, but I wanted to release it first to ask for feedback. I only have the app on macOS, so if there's enough interest I'll invest in making a Windows counterpart. https://ift.tt/45fepBr... Also, in case anyone is wondering, no data is collected, and the snapshots taken during the app's usage are not saved but only used to calculate the distance. October 10, 2025 at 01:57AM
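One plausible way to gauge face-to-screen distance from a camera frame is the pinhole-camera relation: apparent face width in pixels shrinks proportionally as distance grows. FarSight's actual method isn't published, so the constants below (average face width, focal length, threshold) are illustrative assumptions.

```python
# Hedged sketch of distance-from-face-width estimation via a pinhole model;
# the numbers are illustrative, not FarSight's real parameters.
AVG_FACE_WIDTH_CM = 14.0   # rough average adult face width

def estimate_distance_cm(face_width_px: float, focal_length_px: float) -> float:
    """distance = real_width * focal_length / apparent_width (pinhole camera)."""
    return AVG_FACE_WIDTH_CM * focal_length_px / face_width_px

def should_blur(face_width_px: float, focal_length_px: float,
                threshold_cm: float = 40.0) -> bool:
    """Trigger the screen blur when the estimated distance drops below a threshold."""
    return estimate_distance_cm(face_width_px, focal_length_px) < threshold_cm

# With a ~700 px focal length, a 300 px-wide face is ~33 cm away: too close.
print(should_blur(300, 700))  # -> True
print(should_blur(150, 700))  # -> False (about 65 cm)
```

In a real app, the face width would come from a face-detection bounding box, and the check would only fire after the distance stays below the threshold for some dwell time, matching the "certain period of time" behavior described above.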

Wednesday, October 8, 2025

Show HN: Spica – OSS Tool to Generate Infinite Length Sora-2 Videos https://ift.tt/khVWpoz

Show HN: Spica – OSS Tool to Generate Infinite Length Sora-2 Videos https://ift.tt/O7pmuyS October 9, 2025 at 12:04AM

Show HN: KI Song Erstellen Kostenlos – AI Music Generator für Deutsche Musik https://ift.tt/BVtwqv4

Show HN: KI Song Erstellen Kostenlos – AI Music Generator für Deutsche Musik Free AI music generator for German songs. Text in → professional song in a few minutes. Built for content creators who need copyright-free music. https://ift.tt/UT7QzBO GitHub: https://ift.tt/bgTrk19 Try it out! https://ift.tt/UT7QzBO October 8, 2025 at 10:26PM

Tuesday, October 7, 2025

Show HN: Agentic Design Patterns – Python Edition, from the Codex Codebase https://ift.tt/hIUK7va

Show HN: Agentic Design Patterns – Python Edition, from the Codex Codebase While reading Agentic Design Patterns by Antonio Gulli, I wanted to see how these patterns look in real code. I cloned the OpenAI Codex repo (the open-source AI coding assistant that recently trended on HN), but it was in Rust. So I used Cursor to help me extract and translate 18+ agentic patterns from Codex’s codebase into Python. That small experiment turned into a full open-source guide: GitHub: Codex Agentic Patterns https://ift.tt/9upZHC7 Each pattern comes with:
–A short explanation and code sample
–A runnable exercise and agent snippet
–A summary of how Codex used the pattern (e.g., prompt chaining, tool orchestration, reflection loops, sandbox escalation)
–One full working Python agent that ties it all together
If you’ve read the agentic design patterns book or explored Codex, this is a bridge between theory and practice, focused on runnable, open examples instead of abstract diagrams. It’s completely free and open-source. Would love feedback, ideas, or even new patterns from your own agent experiments. https://artvandelay.github.io/codex-agentic-patterns/ October 8, 2025 at 04:11AM
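One of the patterns named above, prompt chaining, can be sketched in a few lines: each step's model output is fed into the next prompt template. This is a generic illustration of the pattern, not code from the guide; `call_llm` is a stub standing in for a real model call.

```python
# Minimal sketch of the "prompt chaining" agentic pattern; call_llm is a
# deterministic stub invented for illustration.
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    return f"<answer to: {prompt}>"

def chain(steps: list[str], initial_input: str) -> str:
    """Run prompt templates in sequence, feeding each output into the next."""
    result = initial_input
    for template in steps:
        result = call_llm(template.format(prev=result))
    return result

summary = chain(
    ["Extract the key decisions from: {prev}",
     "Turn these decisions into action items: {prev}"],
    "meeting transcript text",
)
print(summary)
```

The point of the pattern is that each stage stays small and inspectable; swapping the stub for a real client changes nothing about the chaining logic.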

Show HN: DidMySettingsChange – A tool that checks changed Windows settings https://ift.tt/t6SViur

Show HN: DidMySettingsChange – A tool that checks changed Windows settings Microsoft has been under heavy scrutiny over how it manages Windows, particularly concerning privacy and telemetry settings. Many users find that, after disabling certain settings, they are mysteriously re-enabled after updates or without any apparent reason. DidMySettingsChange is a Python script designed to help users keep track of their Windows privacy and telemetry settings, ensuring that they stay in control of their privacy without the hassle of manually checking each setting. Features:
–Comprehensive checks: automatically scans all known Windows privacy and telemetry settings.
–Change detection: alerts users if any settings have been changed from their preferred state.
–Customizable configuration: allows users to specify which settings to monitor.
–Easy to use: simple command-line interface that provides clear and concise output.
–Logs and reports: generates detailed logs and reports for auditing and troubleshooting.
https://ift.tt/3bKGAHD October 6, 2025 at 04:19AM
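The change-detection idea above boils down to diffing a preferred-state config against observed values. This is a hedged sketch, not the tool's actual code: the real script would read Windows registry keys, while here a plain dict simulates the current state, and the setting names are illustrative.

```python
# Sketch of settings-drift detection; setting names and values are
# illustrative, and "current" stands in for real registry reads.
def detect_changes(preferred: dict, current: dict) -> dict:
    """Return {setting: (wanted, found)} for every monitored setting that drifted."""
    return {
        key: (wanted, current.get(key))
        for key, wanted in preferred.items()
        if current.get(key) != wanted
    }

preferred = {"Telemetry": 0, "AdvertisingId": 0, "LocationConsent": "Deny"}
current = {"Telemetry": 3, "AdvertisingId": 0, "LocationConsent": "Deny"}
print(detect_changes(preferred, current))  # -> {'Telemetry': (0, 3)}
```

Only settings listed in `preferred` are checked, which mirrors the "customizable configuration" feature: monitoring scope is whatever the user declares.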

Show HN: I'm building a browser for reverse engineers https://ift.tt/tyQYdjA

Show HN: I'm building a browser for reverse engineers https://ift.tt/b2yRU5C October 6, 2025 at 09:02PM

Show HN: Gotask, a simple task manager CLI built using Golang https://ift.tt/ZA9jVzt

Show HN: Gotask, a simple task manager CLI built using Golang Hey folks, Gotask is a simple Go CLI I made to explore some aspects of the Go programming language. https://ift.tt/lgN6uwI October 8, 2025 at 12:20AM

Monday, October 6, 2025

Show HN: TinqerJS – LINQ-inspired QueryBuilder for TypeScript + Postgres/SQLite https://ift.tt/7i8JCWk

Show HN: TinqerJS – LINQ-inspired QueryBuilder for TypeScript + Postgres/SQLite https://tinqerjs.org October 6, 2025 at 08:58PM

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/hucG4KP

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/0beoYHD October 6, 2025 at 04:28PM

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/hAY9QL2

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/bRZSUvg October 6, 2025 at 11:52PM

Show HN: Volant – spin up real microVMs in 10 seconds (Docker images or initramfs) https://ift.tt/EgG5ATX

Show HN: Volant – spin up real microVMs in 10 seconds (Docker images or initramfs) I’ve been building Volant, a modular microVM orchestration engine that makes running microVMs feel as simple as Docker. It supports cloud-init, GPU/VFIO passthrough (yes, you can run AI/ML workloads in isolated microVMs), booting Docker images via a plugin system, and Kubernetes-style deployments with replication, all from a single CLI (soon also a web UI; see below). Coming soon: a built-in PaaS mode with snapshot-based cold-start elimination, sort of like Dokploy, but designed for serverless workloads that boot from memory snapshots instead of containers. Volant is intentionally a bit opinionated to make microVMs more accessible, but it’s fully extensible for power users. Check out the README and the docs for more details. It’s free and open source (under BSL); would love to hear feedback or thoughts from anyone! tl;dr: a 6-second GIF in the README shows the full flow: install → create VM → get HTTP 200. https://ift.tt/p3AQNum October 6, 2025 at 04:24AM

Sunday, October 5, 2025

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/OfDnGeR

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/oNZKUrc October 6, 2025 at 09:28AM

Show HN: High-fidelity, compact, and real time rendering of university campus https://ift.tt/ExjboKt

Show HN: High-fidelity, compact, and real time rendering of university campus Technical thread: https://ift.tt/X8UBZ4n https://hoanh.space/aalto/ October 6, 2025 at 05:21AM

Saturday, October 4, 2025

Show HN: An open-source, RL-native observability framework we've been missing https://ift.tt/ietSHwr

Show HN: An open-source, RL-native observability framework we've been missing The RL ecosystem is maturing; verifiers are standardizing how we build and share environments. However, as it grows, we need observability tooling that actually understands RL primitives. Running RL experiments without visibility into rollout quality, reward distributions, or failure modes is a waste of time. Monitor provides live tracking, per-example inspection, and programmatic access: see what's happening during runs and debug what went wrong afterward. https://ift.tt/0Lz1VIO October 5, 2025 at 03:05AM
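The kind of per-rollout tracking described above (reward distributions, failure modes, per-example inspection) can be sketched with a small accumulator. The Monitor API itself isn't shown in the post, so this `RolloutTracker` structure is an invented illustration of the concept.

```python
# Illustrative sketch of RL rollout tracking; not Monitor's actual API.
from statistics import mean

class RolloutTracker:
    def __init__(self):
        self.rollouts = []

    def log(self, example_id: str, reward: float, failed: bool = False):
        """Record one rollout's outcome for later inspection."""
        self.rollouts.append({"id": example_id, "reward": reward, "failed": failed})

    def summary(self) -> dict:
        """Aggregate stats of the kind a dashboard would surface live."""
        rewards = [r["reward"] for r in self.rollouts]
        return {
            "count": len(rewards),
            "mean_reward": mean(rewards) if rewards else 0.0,
            "failure_rate": sum(r["failed"] for r in self.rollouts) / max(len(rewards), 1),
        }

tracker = RolloutTracker()
tracker.log("ex-1", 1.0)
tracker.log("ex-2", 0.0, failed=True)
print(tracker.summary())  # -> {'count': 2, 'mean_reward': 0.5, 'failure_rate': 0.5}
```

Keeping the raw `rollouts` list around (rather than only aggregates) is what makes per-example debugging after a run possible.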

Show HN: World Amazing Framework: Like Django for Civilization https://ift.tt/cBZgEuj

Show HN: World Amazing Framework: Like Django for Civilization Any initial thoughts? This framework is meant to be a tool for construction, so if you want to play around with it for creating potential specific implementations, you can drop the contents of the website, the GitHub README, and the entire overview.md into an AI chat, and that should be enough to use the framework, at least conceptually. Would y'all want me to pre-prime a chat in Google AI Studio with the full context of the plan and some basic direction for discourse? I can share a link to a ready-to-go environment. The core documentation should answer most mechanical questions. And if you feed the docs into an AI chat, you can ask it any question you may have, or to simply ask it to explain something in different ways, or hypothesize solutions to any world issue, either systemic or regional. Gemini Pro 2.5 can take the full doc in one prompt, and its ability to co-create ideas is remarkable. I've been using it mostly through the AI Studio interface. Much of the overview is as much my work as it is a synthesis of my collaboration with Gemini Pro 2.5, ChatGPT-4o, and some early contributions from GPT-4 about a year ago. Before LLMs, I was building out pamphlet-style pages on a website (that are up at whomanatee.org, which is the base wrapper implementation of the framework), and I was planning to use them as talking points. I was anticipating that much of the deep thinking would have to happen in slow, public discourse. With LLMs, I've been able to stress-test these ideas from every possible angle, using any past event or theory to see if the framework could withstand scrutiny. At one point, a model argued that Adam Smith would have rejected this idea as fantasy. So I worked with it to develop an economic plan that "synthetic Adam" praised. It's incredible that we now have the ability to get synthesized thoughts from almost any perspective. You could ask it, "What would Barack Obama think of this plan? 
And using the framework, what would be your response to any hesitations he may have?" And it responds with incredible analysis, synthesis, and feedback. https://ift.tt/8QzGKWh October 5, 2025 at 03:44AM

Show HN: Run – a CLI universal code runner I built while learning Rust https://ift.tt/sM0NdnZ

Show HN: Run – a CLI universal code runner I built while learning Rust Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively. I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit. Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/TDCoZ2l I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), packaging and cross-platform distribution. Thanks — I’ll try to answer questions and share design notes. https://ift.tt/TDCoZ2l October 5, 2025 at 12:04AM

Show HN: Kstack – Skill pack for monitoring/troubleshooting K8s in Claude Code https://ift.tt/GQauRgE

Show HN: Kstack – Skill pack for monitoring/troubleshooting K8s in Claude Code Hi All, Recently I've been using Claude Code a lot for de...