This is an autopost blog, friends. We try to bring you all the latest sports, news, and other updates.
Wednesday, January 28, 2026
Show HN: SHDL – A minimal hardware description language built from logic gates https://ift.tt/sWS6hTC
Show HN: SHDL – A minimal hardware description language built from logic gates

Hi, everyone! I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals. In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed.

SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent.

This is not meant to replace Verilog or VHDL. It's aimed at:
- learning digital logic from first principles
- experimenting with HDL and language design
- teaching or visualizing how complex hardware emerges from simple gates.

I would especially appreciate feedback on:
- the language design choices
- what feels unnecessarily restrictive vs. educationally valuable
- whether this kind of "anti-abstraction" HDL is useful to you.

Repo: https://ift.tt/vNp4xne
Python package: PySHDL on PyPI

To make this concrete, here are a few small working examples written in SHDL:

1. Full Adder

component FullAdder(A, B, Cin) -> (Sum, Cout) {
    x1: XOR; a1: AND; x2: XOR; a2: AND; o1: OR;
    connect {
        A -> x1.A;    B -> x1.B;
        A -> a1.A;    B -> a1.B;
        x1.O -> x2.A; Cin -> x2.B;
        x1.O -> a2.A; Cin -> a2.B;
        a1.O -> o1.A; a2.O -> o1.B;
        x2.O -> Sum;  o1.O -> Cout;
    }
}

2. 16-bit Register

# clk must be high for two cycles to store a value
component Register16(In[16], clk) -> (Out[16]) {
    >i[16]{
        a1{i}: AND; a2{i}: AND; not1{i}: NOT;
        nor1{i}: NOR; nor2{i}: NOR;
    }
    connect {
        >i[16]{
            # Capture on clk
            In[{i}] -> a1{i}.A;
            In[{i}] -> not1{i}.A;
            not1{i}.O -> a2{i}.A;
            clk -> a1{i}.B;
            clk -> a2{i}.B;
            a1{i}.O -> nor1{i}.A;
            a2{i}.O -> nor2{i}.A;
            nor1{i}.O -> nor2{i}.B;
            nor2{i}.O -> nor1{i}.B;
            nor2{i}.O -> Out[{i}];
        }
    }
}

3. 16-bit Ripple-Carry Adder

use fullAdder::{FullAdder};

component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
    >i[16]{ fa{i}: FullAdder; }
    connect {
        A[1] -> fa1.A;
        B[1] -> fa1.B;
        Cin -> fa1.Cin;
        fa1.Sum -> Sum[1];
        >i[2,16]{
            A[{i}] -> fa{i}.A;
            B[{i}] -> fa{i}.B;
            fa{i-1}.Cout -> fa{i}.Cin;
            fa{i}.Sum -> Sum[{i}];
        }
        fa16.Cout -> Cout;
    }
}

https://ift.tt/vNp4xne
January 28, 2026 at 05:36PM
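For a sense of the simulation side, here is a minimal sketch of driving the FullAdder from Python. The PySHDL call names below (load, set_input, step, get_output) are assumptions for illustration only; check the PySHDL docs for the actual API.

# Hypothetical PySHDL session -- load a circuit, poke inputs, step, read outputs.
# The function names here are illustrative assumptions, not the verified API.
import pyshdl

circuit = pyshdl.load("full_adder.shdl")  # compile the SHDL source and load it
circuit.set_input("A", 1)
circuit.set_input("B", 1)
circuit.set_input("Cin", 0)
circuit.step()  # propagate signals through the gates
print(circuit.get_output("Sum"), circuit.get_output("Cout"))  # expect: 0 1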
Tuesday, January 27, 2026
Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/hYiNDUX
Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/bKRMYaU January 28, 2026 at 12:42AM
Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) https://ift.tt/4cJqfry
Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify)

I built Lightbox because I kept running into the same problem: an agent would fail in production, and I had no way to know what actually happened. Logs were scattered, the LLM's "I called the tool" wasn't trustworthy, and re-running wasn't deterministic.

This week, tons of Clawdbot incidents have driven the point home. Agents with full system access can expose API keys and chat histories. Prompt injection is now a major security concern. When agents can touch your filesystem, execute code, and browse the web, you probably need a tamper-proof record of exactly what actions they took, especially when a malicious prompt or compromised webpage could hijack the agent mid-session.

Lightbox is a small Python library that records every tool call an agent makes (inputs, outputs, timing) into an append-only log with cryptographic hashes. You can replay runs with mocked responses, diff executions across versions, and verify the integrity of logs after the fact. Think airplane black box, but for your hackbox.

*What it does:*
- Records tool calls locally (no cloud, your infra)
- Tamper-evident logs (hash chain, verifiable)
- Replays failures exactly with recorded responses
- CLI to inspect, replay, diff, and verify sessions
- Framework-agnostic (works with LangChain, Claude, OpenAI, etc.)

*What it doesn't do:*
- Doesn't replay the LLM itself (just tool calls)
- Not a dashboard or analytics platform
- Not trying to replace LangSmith/Langfuse (different problem)

*Use cases I care about:*
- Security forensics: the agent behaved strangely; was it prompt injection? Check the trace.
- Compliance: "prove what your agent did last Tuesday"
- Debugging: reproduce a failure without re-running expensive API calls
- Regression testing: diff tool call patterns across agent versions

As agents get more capable and more autonomous (Clawdbot/Molt, Claude computer use, Manus, Devin), I think we'll need black boxes the same way aviation does. This is my attempt at that primitive. It's early (v0.1), intentionally minimal, MIT licensed.

Site: https://uselightbox.app
Install: `pip install lightbox-rec`
GitHub: https://github.com/mainnebula/Lightbox-Project

Would love feedback, especially from anyone thinking about agent security or running autonomous agents in production.

https://ift.tt/cT2Ei3W
January 27, 2026 at 10:53PM
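To make the hash-chain idea concrete, here is a minimal, self-contained sketch of a tamper-evident append-only log. It illustrates the general technique only; Lightbox's actual record format and API may differ.

# Concept sketch: each record's hash covers the previous record's hash, so
# editing or deleting any entry breaks the chain. Not Lightbox's real code.
import hashlib, json, time

def append_record(log: list[dict], tool: str, inputs: dict, output) -> dict:
    """Append a tool-call record chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"tool": tool, "inputs": inputs, "output": output,
            "ts": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    record = {**body, "hash": digest}
    log.append(record)
    return record

def verify(log: list[dict]) -> bool:
    """Re-derive every hash; any tampering makes this return False."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_record(log, "read_file", {"path": "notes.txt"}, "contents...")
append_record(log, "http_get", {"url": "https://example.com"}, "<html>...")
print(verify(log))  # True; mutate any recorded field and it becomes False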
Monday, January 26, 2026
Show HN: Ourguide – OS wide task guidance system that shows you where to click https://ift.tt/ZkSLnWr
Show HN: Ourguide – OS wide task guidance system that shows you where to click

Hey! I'm eshaan and I'm building Ourguide, an on-screen task guidance system that shows you where to click, step by step, when you need help.

I started building this because whenever I didn't know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking "what do I do next?"

Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, a vision-integrated chat that captures your screen context (which you can toggle on and off anytime), so you can ask, "How do I fix this error?" without having to explain what "this" is. It's an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser.

Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model on 2,300 screenshots to identify and segment all UI elements on a screen, and used a VLM to find the correct icon to highlight. While this worked extremely well (better than SOTA grounding models like UI-TARS), the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now I've switched to a simpler implementation that achieves <1s latency.

You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today's computer-use models: benchmarks hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So I built computer use, without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks.

I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It's also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide works for any task where you're stuck or don't know what to do.

You can download and test Ourguide here: https://ourguide.ai/downloads

The project is still very early, and I'd love your feedback on where it fails, where it works well, and which specific niches you think Ourguide would be most helpful for.

https://ourguide.ai
January 26, 2026 at 11:49PM
Show HN: Hybrid Markdown Editing https://ift.tt/zOdClti
Show HN: Hybrid Markdown Editing Shows rendered preview for unfocused lines and raw markdown for the line or block being edited. https://tiagosimoes.github.io/codemirror-markdown-hybrid/ January 27, 2026 at 12:46AM
Show HN: Managed Postgres with native ClickHouse integration https://ift.tt/qQxW34c
Show HN: Managed Postgres with native ClickHouse integration

Hello HN, this is Sai and Kaushik from ClickHouse. Today we are launching a Postgres managed service that is natively integrated with ClickHouse. It is built together with Ubicloud (YC W24).

TL;DR: NVMe-backed Postgres + built-in CDC into ClickHouse + pg_clickhouse, so you can keep your app Postgres-first while running analytics in ClickHouse.

Try it (private preview): https://ift.tt/utLrkTZ
Blog w/ live demo: https://ift.tt/qoXTySv

Problem

Across many fast-growing companies using Postgres, performance and scalability commonly emerge as challenges, for both transactional and analytical workloads.

On the OLTP side, common issues include slow ingestion (especially updates and upserts), slow vacuums, and long-running transactions incurring WAL spikes, among others. In most cases, these problems stem from limited disk IOPS and suboptimal disk latency. Without the need to provision or cap IOPS, Postgres could do far more than it does today.

On the analytics side, many limitations stem from the fact that Postgres was designed primarily for OLTP and lacks several features that analytical databases have developed over time, for example vectorized execution and support for a wide variety of ingest formats. We're increasingly seeing a common pattern where companies like GitLab, Ramp, and Cloudflare complement Postgres with ClickHouse to offload analytics. This architecture enables teams to adopt two purpose-built open-source databases. That said, if you're running a Postgres-based application, adopting ClickHouse isn't straightforward: you typically end up building a CDC pipeline, handling backfills, dealing with schema changes, and updating your application code to be aware of a second database for analytics.

Solution

On the OLTP side, we believe that NVMe-based Postgres is the right fit and can drastically improve performance. NVMe storage is physically colocated with compute, enabling significantly lower disk latency and higher IOPS than network-attached storage, which requires a network round trip for disk access. This benefits disk-throttled workloads and can significantly (up to 10x) speed up operations including updates, upserts, vacuums, and checkpointing. We are working on a detailed blog examining how WAL fsyncs, buffer reads, and checkpoints dominate on slow I/O and are significantly reduced on NVMe. Stay tuned!

On the OLAP side, the Postgres service includes native CDC to ClickHouse and unified query capabilities through pg_clickhouse. Today, CDC is powered by ClickPipes/PeerDB under the hood, which is based on logical replication. We are working to make this faster and easier by supporting logical replication v2 for streaming in-progress transactions, building a new logical decoding plugin to address existing limitations of logical replication, working toward sub-second replication, and more.

Every Postgres instance comes packaged with the pg_clickhouse extension, which reduces the effort required to add ClickHouse-powered analytics to a Postgres application. It allows you to query ClickHouse directly from Postgres, enabling Postgres for both transactions and analytics. pg_clickhouse supports comprehensive query pushdown for analytics, and we plan to continuously expand this further ( https://ift.tt/THbUV6p ).

Vision

To sum it up: our vision is to provide a unified data stack that combines Postgres for transactions with ClickHouse for analytics, giving you best-in-class performance and scalability on an open-source foundation.

Get Started

We are actively working with users to onboard them to the Postgres service. Since this is a private preview, it is currently free of cost. If you're interested, please sign up here: https://ift.tt/utLrkTZ

We'd love to hear your feedback on our thesis and anything else that comes to mind; it would be super helpful to us as we build this out!

January 22, 2026 at 11:51PM
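To illustrate the Postgres-first pattern this enables, here is a sketch using psycopg: the app writes transactionally to Postgres, then runs an aggregate against a ClickHouse-backed relation. The table names (orders, orders_analytics) and the pushdown mechanics are assumptions for illustration; see the pg_clickhouse docs for the real setup.

# Sketch of the Postgres-first pattern: one connection for both OLTP and
# analytics. 'orders_analytics' is a hypothetical ClickHouse-backed relation
# kept in sync by CDC -- names and setup are illustrative assumptions.
import psycopg

with psycopg.connect("dbname=app") as conn:
    # OLTP: a normal transactional write handled by Postgres on NVMe.
    conn.execute(
        "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
        (42, 19.99),
    )
    # OLAP: the aggregate is pushed down to ClickHouse via pg_clickhouse.
    rows = conn.execute(
        "SELECT date_trunc('day', created_at) AS day, sum(amount) "
        "FROM orders_analytics GROUP BY day ORDER BY day"
    ).fetchall()
    print(rows)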
Sunday, January 25, 2026
Show HN: Uv-pack – Pack a uv environment for later portable (offline) install https://ift.tt/r436umD
Show HN: Uv-pack – Pack a uv environment for later portable (offline) install

I kept running into the same problem: modern Python tooling is great, but deploying to air-gapped systems is a pain. Even with uv, moving a fully locked environment onto a network-isolated machine was no fun.

uv-pack should make this task less frustrating. It bundles a locked uv environment into a single directory that installs fully offline: dependencies, local packages, and optionally a portable Python interpreter. Copy it over, run one script, and you get the exact same environment every time.

Just released, would love some feedback!

https://ift.tt/3Czo2ac
January 26, 2026 at 12:26AM
Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://ift.tt/t9u3iKz
Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://landkit.pro/youtube-to-blog January 25, 2026 at 11:16PM
Saturday, January 24, 2026
Show HN: Remote workers find your crew https://ift.tt/CwgucMe
Show HN: Remote workers find your crew

Working from home? Are you a remote employee who "misses" going to the office? Well, let's be clear on what you actually miss. No one misses the feeling of having to go and be there for 8 hours. But many people miss friends. They miss being part of a crew: going to lunch, hearing about other people's lives in person, not over Zoom.

Join a co-working space, you say? Yes. We have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling.

https://ift.tt/5E38K4R

This app helps you find a crew and meet up for work and get that crew feeling.

This is my first time using Cloudflare Workers for a webapp. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql.

January 24, 2026 at 11:54PM
Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents https://ift.tt/6ghNF93
Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents

I built Polymcp, a framework that allows you to transform any Python function into an MCP (Model Context Protocol) tool ready to be used by AI agents. No rewriting, no complex integrations.

Examples

Simple function:

from polymcp.polymcp_toolkit import expose_tools_http

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

app = expose_tools_http([add], title="Math Tools")

Run with: uvicorn server_mcp:app --reload

Now add is exposed via MCP and can be called directly by AI agents.

API function:

import requests
from polymcp.polymcp_toolkit import expose_tools_http

def get_weather(city: str):
    """Return current weather data for a city"""
    response = requests.get(f"https://ift.tt/PqVOWDT")
    return response.json()

app = expose_tools_http([get_weather], title="Weather Tools")

AI agents can call get_weather("London") to get real-time weather data instantly.

Business workflow function:

import pandas as pd
from polymcp.polymcp_toolkit import expose_tools_http

def calculate_commissions(sales_data: list[dict]):
    """Calculate sales commissions from sales data"""
    df = pd.DataFrame(sales_data)
    df["commission"] = df["sales_amount"] * 0.05
    return df.to_dict(orient="records")

app = expose_tools_http([calculate_commissions], title="Business Tools")

AI agents can now generate commission reports automatically.

Why it matters for companies

• Reuse existing code immediately: legacy scripts, internal libraries, APIs.
• Automate complex workflows: AI can orchestrate multiple tools reliably.
• Plug-and-play: multiple Python functions exposed on the same MCP server.
• Reduce development time: no custom wrappers or middleware needed.
• Built-in reliability: input/output validation and error handling included.

Polymcp makes Python functions immediately usable by AI agents, standardizing integration across enterprise software.

Repo: https://ift.tt/1HobY6U
January 25, 2026 at 12:57AM
Friday, January 23, 2026
Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://ift.tt/mghlVMN
Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://gist.github.com/juanpabloaj/59bc13fbed8a0f8e87791a3fb0360c19 January 24, 2026 at 12:03AM
Show HN: Teemux – Zero-config log multiplexer with built-in MCP server https://ift.tt/Jz21iMS
Show HN: Teemux – Zero-config log multiplexer with built-in MCP server

I started to use AI agents for coding and quickly ran into a frustrating limitation: there is no easy way to share my development environment logs with AI agents.

That's what Teemux is. A simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents over MCP.

There is one implementation detail that I geek out about: it is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts the web server; when you start second and third instances, they join the first server and start merging logs. If you were to kill the first instance, a new leader is nominated. This design allows you to seamlessly add and remove nodes that share logs, a job that historically would have required a central log aggregator.

A super quick demo: npx teemux -- curl -N https://ift.tt/sKhBTMp

https://teemux.com/
January 23, 2026 at 09:19PM
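A minimal sketch of that leader-nomination pattern, assuming election by binding a well-known local port (this illustrates the general idea, not Teemux's actual implementation):

# Whoever binds the port first is the leader and runs the shared server;
# everyone else connects to it as a follower. Port number is arbitrary.
import socket

LEADER_PORT = 48372  # fixed port all instances agree on

def try_become_leader() -> socket.socket | None:
    """Return a listening socket if we won leadership, else None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", LEADER_PORT))
        sock.listen()
        return sock  # we are the leader: start the web/MCP server here
    except OSError:
        sock.close()
        return None  # port taken: an existing leader is already running

if (server := try_become_leader()) is not None:
    print("leader: serving merged logs")
else:
    print("follower: forwarding my logs to the existing leader")
# If the leader dies, its port frees up and a follower's retry loop
# (not shown) wins the bind and becomes the new leader.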
Thursday, January 22, 2026
Show HN: Synesthesia, make noise music with a colorpicker https://ift.tt/7lLdNV2
Show HN: Synesthesia, make noise music with a colorpicker

This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked color's hex code to a "chord" in the 24-tone equal temperament (24-TET) scale. That chord is then played back using a throttled audio generation method implemented with Tone.js.

NOTE! Turn the volume way down before using the site. It is noise music. :)

https://visualnoise.ca
January 22, 2026 at 11:22AM
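A small sketch of this kind of mapping, under an assumed (simplified) scheme: split the 24-bit hex code into its RGB bytes and fold each onto a 24-TET octave. The site's exact mapping may differ.

# Map '#RRGGBB' to three 24-TET frequencies, one per color channel.
# The mapping itself is an illustrative assumption, not the site's code.
BASE_HZ = 220.0  # A3 as an arbitrary reference pitch

def color_to_chord(hex_code: str) -> list[float]:
    """Return three frequencies derived from the color's RGB bytes."""
    value = int(hex_code.lstrip("#"), 16)
    channels = [(value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF]
    # Fold each 0..255 channel onto a 24-step octave: one step = 2**(1/24).
    return [BASE_HZ * 2 ** ((c % 24) / 24) for c in channels]

print(color_to_chord("#3A7FC2"))  # three frequencies to feed an oscillator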
Wednesday, January 21, 2026
Show HN: Mirage – Experimental Java obf using reflection to break direct calls https://ift.tt/rIO3vb2
Show HN: Mirage – Experimental Java obf using reflection to break direct calls Replaces method calls and field accesses with reflection equivalents → makes static analysis and decompilers much less useful. Experimental, performance hit expected. https://ift.tt/DUmiYeX January 21, 2026 at 10:22PM
Show HN: I built a chess explorer that explains strategy instead of just stats https://ift.tt/vP9nJuG
Show HN: I built a chess explorer that explains strategy instead of just stats

I built this because I got tired of Stockfish giving me evaluations (+0.5) without explaining the actual plan. Most opening explorers focus on statistics (win/loss/draw). I wanted a tool that explains the strategic intent behind the moves (e.g., "White plays c4 to clamp down on d5" vs. just "White plays c4").

The Project:

- Comprehensive database: I've mapped and annotated over 3,500 named opening variations, covering everything from main lines (Ruy Lopez, Sicilian) to deep sidelines.
- Strategic visualization: the UI highlights key squares and draws arrows based on the textual explanation, linking the logic to the board state dynamically.
- Hybrid architecture: for the 3,500+ core lines, it serves my proprietary strategic data. For anything deeper or rarer, it seamlessly falls back to the Lichess Masters API, so the explorer remains functional 20 moves deep.

Stack: Next.js (App Router), MongoDB Atlas for the graph data, and Arcjet for security/rate-limiting.

It is currently in beta. I am working on expanding the annotated coverage, but the main theoretical landscape is mapped. Feedback on the UI/UX or the data structure is welcome.

https://ift.tt/Fj4UeMQ
January 21, 2026 at 09:26PM
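A sketch of that fallback pattern under stated assumptions: the annotated lookup is a hypothetical stand-in for the MongoDB data, and the Lichess Masters explorer endpoint and response shape should be checked against the current API docs.

# Serve annotated strategy for known positions; fall back to raw master-game
# statistics from the public Lichess opening explorer for rarer lines.
import requests

ANNOTATED = {  # hypothetical stand-in for the MongoDB-backed strategy data
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -":
        "Starting position: fight for the center with e4/d4 or c4/Nf3 setups.",
}

def explain(fen: str) -> str:
    if fen in ANNOTATED:
        return ANNOTATED[fen]
    # Fallback: statistics instead of strategic annotation.
    resp = requests.get("https://explorer.lichess.ovh/masters", params={"fen": fen})
    resp.raise_for_status()
    top = resp.json()["moves"][:3]
    return "Top master moves: " + ", ".join(m["san"] for m in top)

print(explain("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -"))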
Tuesday, January 20, 2026
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs https://ift.tt/ek1Qfr8
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

Hi HN, we're Sam, Shane, and Abhi. Almost a year ago, we first shared Mastra here ( https://ift.tt/AEfw5qe ). It's kind of fun looking back, since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback. Today, we released Mastra 1.0 as stable, so we wanted to come back and talk about what's changed.

If you're new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability. Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It's now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we've added a lot since February:

- Native model routing: access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.
- Guardrails: low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.
- Scorers: an async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.
- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server. (That last one took a bit of time; we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`. We'll be around and happy to answer any questions!

https://ift.tt/Z5HzFTi
January 20, 2026 at 10:08PM
Show HN: Typing Tennis https://ift.tt/GmzbUVp
Show HN: Typing Tennis Hey HN, Here’s a quick weekend project: tennis, but played by typing. Try it out! https://ift.tt/ovecFiz January 20, 2026 at 11:36PM
Monday, January 19, 2026
Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/8WXNOUJ
Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/nejiUMw January 19, 2026 at 11:17PM
Show HN: Subth.ink – write something and see how many others wrote the same https://ift.tt/nqW6hPS
Show HN: Subth.ink – write something and see how many others wrote the same

Hey HN, this is a small Haskell learning project that I wanted to share. It's just a website where you can see how many people wrote the exact same text as you (I thought it was a fun idea). It's built using Scotty, SQLite, Redis, and Caddy. Currently it's running on a small DigitalOcean droplet (1 GB RAM).

Using Haskell for web development (specifically with Scotty) was slightly easier than I expected, but still relatively hard compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (and lazy Text), ByteString (and lazy ByteString), with each library choosing to consume a different one. There is also a soft requirement to learn monad transformers (e.g., to understand what liftIO is doing), which made the initial development more difficult.

https://subth.ink/
January 20, 2026 at 12:04AM
Sunday, January 18, 2026
Show HN: Xenia – A monospaced font built with a custom Python engine https://ift.tt/ZFeIj5n
Show HN: Xenia – A monospaced font built with a custom Python engine

I'm an engineer who spent the last year fixing everything I hated about monospaced fonts (especially that double-story 'a'). I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support. The Regular weight is free for the community; I'm releasing more weights based on interest.

https://ift.tt/S7hcmwd
January 18, 2026 at 04:09PM
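A generic sketch of procedural weight generation, assuming glyphs stored as stroke skeletons with per-stroke thickness (an assumption for illustration, not Xenia's actual engine):

# Derive heavier weights by scaling stem widths on a stroke skeleton.
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list[tuple[float, float]]  # skeleton polyline in font units
    thickness: float                   # stem width at Regular weight

def scale_weight(strokes: list[Stroke], factor: float) -> list[Stroke]:
    """Produce a new weight by scaling every stem width (e.g. 1.35 for Bold)."""
    return [Stroke(s.points, s.thickness * factor) for s in strokes]

# A lowercase 'l' as a single vertical stem, 600 units tall, 80 units wide.
regular_l = [Stroke([(300.0, 0.0), (300.0, 600.0)], thickness=80.0)]
bold_l = scale_weight(regular_l, 1.35)
print(bold_l[0].thickness)  # 108.0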