This is an autopost blog, friends. We try to bring you all the latest sports, news, and other updates.
Tuesday, July 23, 2024
Show HN: Formula 1, 2, 3 and E Weather updates https://ift.tt/iUszQ0m
Show HN: Formula 1, 2, 3 and E Weather updates Get up to date predicted weather for the upcoming Formula 1, 2, 3 and E sessions! https://ift.tt/auNWeph July 23, 2024 at 11:57PM
Monday, July 22, 2024
Show HN: Easily map CSV data with lat/lon to H3 for enrichment or aggregation https://ift.tt/hdyGXOk
Show HN: Easily map CSV data with lat/lon to H3 for enrichment or aggregation https://ift.tt/6N2AJmn July 23, 2024 at 04:08AM
Show HN: I made a task manager / calendar app to use with my wife https://ift.tt/c29EIsi
Show HN: I made a task manager / calendar app to use with my wife Hi HN, Two years ago, I started building Jinear as a side project for fun. Then my wife started using it to plan her PhD thesis. I added features as we needed them. After a while, my best friend started using it in his small company, so I thought maybe I could productize it. You can create tasks, set reminders, attach files, link your Google Calendar, etc. It can be used as a PWA. We've been using it daily. I would be happy to receive your feedback. Thanks. https://jinear.co July 22, 2024 at 10:04PM
Show HN: I packaged all of the productivity advice into one product https://ift.tt/K1IFziv
Show HN: I packaged all of the productivity advice into one product Hey hackers, Like many of you, I'm always trying to optimize my productivity, and I have tried a lot of apps out there to do so. I didn't find the exact one that I could use, so I built one for myself. Check it out and let me know if you have any thoughts! https://www.focusmax.io July 22, 2024 at 08:54PM
Sunday, July 21, 2024
Show HN: Shade/Bs – Modern Web UIs Without Node.js https://ift.tt/skNq37b
Show HN: Shade/Bs – Modern Web UIs Without Node.js https://ift.tt/YX1lJ8s July 22, 2024 at 04:58AM
Show HN: Create how-to videos and guides fast https://ift.tt/UXI5GeY
Show HN: Create how-to videos and guides fast Hey HN, I'm Kamal, a 20-year-old builder from India, working with a team of 4. I was initially building an AI-powered course builder, and once we started getting paying customers, the biggest problem we encountered was teaching them how to use the product. I wanted to create a help centre but couldn't bring myself to, probably out of procrastination, because it felt like a huge task. So my co-founder and I decided to make creating videos and guides for this help centre fast and easy. That's where Kroto comes in: you just record a product/process walkthrough, and it generates studio-quality how-to videos with zoom-in and transition effects, along with a step-by-step guide with GIFs for every action. Here's the demo: https://www.youtube.com/watch?v=JmeeNmpNepY I want to know whether you face similar issues, and to get some feedback on the product. Biggest issues right now: publishing time is way too long, the editor isn't optimized, and there's no way to remove or add zoom-ins in the video. https://www.kroto.one July 21, 2024 at 10:13PM
Saturday, July 20, 2024
Show HN: Local Devin – powered by Sonnet 3.5 https://ift.tt/RgkWZj2
Show HN: Local Devin – powered by Sonnet 3.5 https://ift.tt/ZvGYNEg July 21, 2024 at 03:24AM
Show HN: Live Demo of GraphRAG with GPT-4o mini https://ift.tt/TX1cY8P
Show HN: Live Demo of GraphRAG with GPT-4o mini Hi HN, Microsoft recently open-sourced the GraphRAG framework, which enables more contextual responses than traditional vector-based RAG, especially for summarization-focused queries on textual data. However, a common critique is the LLM costs for constructing the knowledge graph. With the newly released GPT-4o mini, working with GraphRAG would now be ~30x cheaper. We built a demo with quarterly earning call transcripts from a few S&P 100 companies comparing GraphRAG with GPT-4o, GraphRAG with GPT-4o mini, and Baseline RAG. Try out the demo here: https://ift.tt/sWAJ3Nq Looking forward to your feedback! https://ift.tt/sWAJ3Nq July 21, 2024 at 12:11AM
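The "~30x cheaper" figure can be checked with simple arithmetic. A back-of-the-envelope sketch, assuming the published mid-2024 per-million-token prices for the two models and an illustrative corpus size (both are assumptions of this example, not figures from the demo):

```python
# Back-of-the-envelope cost comparison for GraphRAG index construction.
# Prices are the published per-1M-token rates as of July 2024 and may have
# changed since; corpus/output sizes below are illustrative assumptions.
PRICES = {
    "gpt-4o":      {"input": 5.00, "output": 15.00},   # USD per 1M tokens
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def index_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough LLM cost of building a knowledge graph over a corpus."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10M-token corpus producing ~2M tokens of extracted graph data.
big = index_cost("gpt-4o", 10_000_000, 2_000_000)
mini = index_cost("gpt-4o-mini", 10_000_000, 2_000_000)
print(f"GPT-4o: ${big:.2f}  GPT-4o mini: ${mini:.2f}  ratio: {big / mini:.0f}x")
```

Under these assumptions the ratio lands right around the ~30x the post mentions.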
Friday, July 19, 2024
Show HN: Mistral NeMo finetuning fits in Colab https://ift.tt/qnT1iN0
Show HN: Mistral NeMo finetuning fits in Colab Managed to make Mistral NeMo 12B https://ift.tt/NE8UH3M fit in a free Google Colab with a Tesla T4 GPU (16 GB) for 4-bit QLoRA finetuning! Managed to shave 60% off VRAM usage and make it 2x faster as well! It should also work in under 12 GB of VRAM. https://ift.tt/MjWARa8 July 19, 2024 at 10:06PM
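A rough memory budget illustrates why a 12B-parameter model can fit at all: 4-bit quantization stores half a byte per weight, and QLoRA keeps trainable weights and optimizer state only for the small LoRA adapters. The component sizes below are coarse assumptions for illustration, not the project's actual numbers:

```python
# Rough VRAM budget for 4-bit QLoRA finetuning of a 12B-parameter model.
# All per-component sizes are coarse, illustrative assumptions.
GIB = 1024 ** 3

params = 12_000_000_000
base_weights = params * 0.5 / GIB      # 4-bit quantized base weights: 0.5 bytes/param
lora_params = 50_000_000               # assumed adapter size (well under 1% of base)
lora_weights = lora_params * 2 / GIB   # fp16 adapter weights
optimizer = lora_params * 8 / GIB      # Adam moments for the adapters, fp32 (2 x 4 B)

total = base_weights + lora_weights + optimizer
print(f"~{total:.1f} GiB before activations")  # well under a 16 GiB T4
```

Activations and CUDA overhead come on top of this, which is where the memory-efficiency work (gradient checkpointing, etc.) earns the remaining headroom.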
Show HN: AI Negotiation roleplays for training and fun https://ift.tt/pvAjRsV
Show HN: AI Negotiation roleplays for training and fun Dear hackers, I recently developed a prototype for a friend of mine that lets job candidates for sales positions prove their negotiation skills. I have now turned it into a public demo to get your feedback. The first scenario, House Negotiation, is live, and Hostage Takers will follow. Have fun and tell me what you think. https://ift.tt/0cVYXK9 July 20, 2024 at 02:46AM
Show HN: Spectral – Visualize, explore, and share code in Python/JS/TS https://ift.tt/Spwa2RF
Show HN: Spectral – Visualize, explore, and share code in Python/JS/TS https://ift.tt/LtWjDXh July 20, 2024 at 01:22AM
Show HN: Building a Next.js and Firebase boilerplate to save 80% of my time https://ift.tt/IqlRYoQ
Show HN: Building a Next.js and Firebase boilerplate to save 80% of my time https://ift.tt/r9M1jky July 19, 2024 at 02:02PM
Thursday, July 18, 2024
Show HN: ChatGPT Chrome Extension to Keep Temporary Chat Enabled https://ift.tt/1V9xkqH
Show HN: ChatGPT Chrome Extension to Keep Temporary Chat Enabled https://ift.tt/3cM6HVW July 19, 2024 at 09:35AM
Show HN: NetSour, CLI Based Wireshark https://ift.tt/q1IeyWB
Show HN: NetSour, CLI Based Wireshark This code is still in early beta, but I sincerely hope it will become as ubiquitous as Vim on Linux. https://ift.tt/DNmTce7 July 19, 2024 at 07:47AM
Show HN: How we leapfrogged traditional vector based RAG with a 'language map' https://ift.tt/TbKksIf
Show HN: How we leapfrogged traditional vector based RAG with a 'language map' TL;DR: Vector-based RAG performs poorly for many real-world applications like codebase chats, and you should consider 'language maps'. Part of our mission at Mutable.ai is to make it much easier for developers to build and understand software. One of the natural ways to do this is to create a codebase chat that answers questions about your repo and helps you build features. It might seem simple to plug your codebase into a state-of-the-art LLM, but LLMs have two limitations that make human-level assistance with code difficult: 1. They currently have context windows that are too small to accommodate most codebases, let alone your entire organization's codebases. 2. They need to reason immediately to answer any question, without thinking through the answer "step-by-step." We built a chat about a year ago based on keyword retrieval and vector embeddings. No matter how hard we tried, including training our own dedicated embedding model, we could not get good performance out of the chat. Here is a typical example: https://ift.tt/NFS8QnH... If you asked how to do quantization in llama.cpp, the answers were oddly specific and consistently pulled in the wrong context, especially from tests. We could, of course, take countermeasures, but it felt like a losing battle. So we went back to step 1: let's understand the code, let's do our homework, and for us, that meant actually putting an understanding of the codebase down in a document — a Wikipedia-style article — called Auto Wiki. The wiki features diagrams and citations to your codebase. Example: https://ift.tt/4lo8NYC This wiki is useful in and of itself for onboarding and understanding the business logic of a codebase, but one of the hopes in constructing such a document was that we'd be able to circumvent traditional keyword and vector-based RAG approaches.
It turns out that using a wiki to find context for an LLM overcomes many of the weaknesses of our previous approach, while still scaling to arbitrarily large codebases: 1. Instead of context retrieval through vectors or keywords, the context is retrieved by looking at the sources that the wiki cites. 2. The answers are based both on the section(s) of the wiki that are relevant AND the content of the actual code that we put into memory — this functions as a "language map" of the codebase. See it in action below for the same query as our old codebase chat: https://ift.tt/NFS8QnH... https://ift.tt/NFS8QnH... The answer cites its sources in both the wiki and the actual code, and gives a step-by-step guide to doing quantization, with example code. The quality of the answer is dramatically improved: it is more accurate, relevant, and comprehensive. It turns out language models love being given language, not a bunch of text snippets that happen to be nearby in vector space or share certain keywords! We find strong performance consistently across codebases of all sizes. The results from the chat are so good they even surprised us a little bit; you should check it out on a codebase of your own at https://wiki.mutable.ai , which we are happy to do for free for open-source code, and which starts at just $2/mo/repo for private repos. We are introducing evals demonstrating how much better our chat is with this approach, but we were so happy with the results that we wanted to share them with the whole community. Thank you! https://twitter.com/mutableai/status/1813815706783490055 July 19, 2024 at 12:10AM
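The retrieval flow described in that post can be sketched hypothetically: match the query against wiki prose, then pull in the exact code files the winning section cites. The data structures, function names, and the crude keyword-overlap scoring below are invented for the example and are not Mutable.ai's implementation:

```python
# Hypothetical sketch of "language map" retrieval: instead of nearest-neighbor
# search in vector space, find the wiki section whose prose best matches the
# query, then attach the source files that section cites.

def score(query: str, text: str) -> int:
    """Crude relevance: number of shared lowercase words (illustrative only)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve_context(query: str, wiki: list[dict], source: dict[str, str]) -> str:
    """Return the best-matching wiki section plus the code it cites."""
    best = max(wiki, key=lambda section: score(query, section["text"]))
    cited_code = "\n".join(source[path] for path in best["cites"])
    return best["text"] + "\n" + cited_code

# Toy "wiki" and "codebase" standing in for an Auto Wiki article and a repo.
wiki = [
    {"text": "Quantization converts fp16 weights to lower-bit formats.",
     "cites": ["quantize.c"]},
    {"text": "The tokenizer splits input text into model vocabulary ids.",
     "cites": ["tokenize.c"]},
]
source = {"quantize.c": "// quantize_row_q4_0(...)",
          "tokenize.c": "// llama_tokenize(...)"}

ctx = retrieve_context("how do I run quantization", wiki, source)
print(ctx)
```

The point of the structure, as the post argues, is that the LLM receives coherent prose plus the precise code it references, rather than disconnected snippets that merely sit nearby in embedding space.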
Wednesday, July 17, 2024
Show HN: How we use LLMs to find testing gaps, vulnerabilities in codebases https://ift.tt/ZBbIm2H
Show HN: How we use LLMs to find testing gaps, vulnerabilities in codebases Hello everyone! I’m thrilled to announce the latest feature from Mutahunter.ai, the ultimate tool for finding and fixing weaknesses in your code. We’ve designed Mutahunter to leverage mutation testing powered by advanced LLMs, helping you uncover vulnerabilities and enhance your code quality effortlessly. Introducing our newest feature: Detailed Mutation Testing Reports! After running our mutation tests, Mutahunter now generates comprehensive reports that clearly summarize:
• Vulnerable code gaps
• Test case gaps
These reports significantly reduce the cognitive load on developers by providing an easy-to-read summary of critical insights, enabling you to focus on what matters most—improving your code. We are proud to be completely open-source, and we invite you to check us out on GitHub: https://ift.tt/M3yFVYJ https://ift.tt/2fT0BGn July 18, 2024 at 02:19AM
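For readers unfamiliar with the underlying technique, here is a minimal sketch of classic mutation testing, which Mutahunter extends with LLM-generated mutants. The textual operator swap below is purely illustrative and not how Mutahunter generates mutants:

```python
# Minimal mutation-testing illustration: change an operator in the code under
# test and check whether the test suite notices. A mutant that "survives"
# (tests still pass) reveals a testing gap.

def run_suite(src: str) -> bool:
    """Exec the (possibly mutated) source and report whether the tests pass."""
    ns: dict = {}
    exec(src, ns)
    try:
        assert ns["add"](2, 3) == 5   # the whole "test suite" for this toy
        return True
    except AssertionError:
        return False

original = "def add(a, b):\n    return a + b\n"
mutant = original.replace("a + b", "a - b")   # illustrative operator mutation

survived = run_suite(mutant)
print("mutant survived" if survived else "mutant killed")  # prints "mutant killed"
```

Here the suite catches the mutant; a report like Mutahunter's summarizes the mutants that are *not* caught, since those mark untested behavior.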
Show HN: SQLite Transaction Benchmarking Tool https://ift.tt/HhpBu4o
Show HN: SQLite Transaction Benchmarking Tool I wanted to make my own evaluation of what kind of performance I could expect from SQLite on a server and investigate the experimental `BEGIN CONCURRENT` branch vs the inbuilt `DEFERRED` and `IMMEDIATE` behaviors. Explanatory blog post: https://ift.tt/EXipu8l https://ift.tt/vUQq6Z2 July 18, 2024 at 03:14AM
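As background for what that benchmark compares: with DEFERRED, SQLite takes the write lock lazily at the first write (where a busy database can fail the transaction), while IMMEDIATE takes it up front; BEGIN CONCURRENT only exists on an experimental SQLite branch, so it is not shown here. A minimal single-connection timing sketch using Python's stdlib sqlite3 (numbers will vary wildly by machine, and a real benchmark would use many concurrent writers):

```python
# Time single-insert transactions under SQLite's two built-in BEGIN modes.
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit; we issue BEGIN ourselves
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")

def bench(mode: str, n: int = 200) -> float:
    """Run n one-insert transactions with BEGIN <mode>; return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        con.execute(f"BEGIN {mode}")
        con.execute("INSERT INTO kv (v) VALUES (?)", (f"row-{mode}-{i}",))
        con.execute("COMMIT")
    return time.perf_counter() - start

for mode in ("DEFERRED", "IMMEDIATE"):
    print(f"{mode}: {bench(mode):.4f}s for 200 single-insert transactions")
```

With one connection the two modes perform similarly; the interesting differences the linked post explores appear under write contention, where DEFERRED transactions can fail mid-flight with SQLITE_BUSY.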
Show HN: Blitzping – A far faster nping/hping3 SYN-flood alternative with CIDR https://ift.tt/owahKpH
Show HN: Blitzping – A far faster nping/hping3 SYN-flood alternative with CIDR I found hping3 and nmap's nping to be far too slow at sending individual, bare-minimum (40-byte) TCP SYN packets; besides inefficient socket I/O, they were also doing far too much unnecessary processing in what should otherwise have been a tight execution loop. Furthermore, neither of them could handle CIDR notations (i.e., a range of IP addresses) as their source IP parameter. Being intended for embedded devices (e.g., low-power MIPS/Arm-based routers), Blitzping only depends on standard POSIX headers and C11's libc (whether musl or gnu). To that end, even when supporting CIDR prefixes, Blitzping is significantly faster than hping3, nping, and whatever else was hosted on GitHub. Here are some of the performance optimizations done in Blitzping:
* Pre-generation: all the static parts of the packet buffer are generated once, outside the sendto() tight loop;
* Asynchronous I/O: raw sockets are configured to be non-blocking by default;
* Multithreading: the same socket is polled in sendto() from multiple threads; and
* Compiler flags: compiling with -Ofast, -flto, and -march=native (though these had little effect; by that point, the bottleneck is the kernel's own sendto() routine).
Shown below are comparisons between the three programs across two CPUs (more details at the GitHub repository):
# Quad-core "Rockchip RK3328" CPU @ 1.3 GHz (ARMv8-A)
+--------------------+--------------+--------------+---------------+
| ARM (4 x 1.3 GHz)  | nping        | hping3       | Blitzping     |
+--------------------+--------------+--------------+---------------+
| Num. Instances     | 4 (1 thread) | 4 (1 thread) | 1 (4 threads) |
| Pkts. per Second   | ~65,000      | ~80,000      | ~275,000      |
| Bandwidth (MiB/s)  | ~2.50        | ~3.00        | ~10.50        |
+--------------------+--------------+--------------+---------------+
# Single-core "Qualcomm Atheros QCA9533" SoC @ 650 MHz (MIPS32r2)
+--------------------+--------------+--------------+---------------+
| MIPS (1 x 650 MHz) | nping        | hping3       | Blitzping     |
+--------------------+--------------+--------------+---------------+
| Num. Instances     | 1 (1 thread) | 1 (1 thread) | 1 (1 thread)  |
| Pkts. per Second   | ~5,000       | ~10,000      | ~25,000       |
| Bandwidth (MiB/s)  | ~0.20        | ~0.40        | ~1.00         |
+--------------------+--------------+--------------+---------------+
I tested Blitzping against both hping3 and nping on two different routers, both running OpenWrt 23.05.03 (Linux kernel v5.15.150) with the "masquerading" option (i.e., NAT) turned off in the firewall; one device was a single-core 32-bit MIPS SoC, and the other was a 64-bit quad-core ARMv8 CPU. On the quad-core CPU, because both hping3 and nping were designed without multithreading capabilities (unlike Blitzping), I made the competition "fairer" by launching them as four individual processes, as opposed to Blitzping using only one. Across all runs and on both devices, CPU usage remained at 100%, entirely dedicated to the running program. Finally, the connection speed itself was not a bottleneck: both devices were connected to an otherwise-unused 200 Mb/s (23.8419 MiB/s) download/upload line through a WAN Ethernet interface. It is important to note that Blitzping was not doing any less than hping3 and nping; in fact, it was doing more. While hping3 and nping only randomized the source port, with the source IP set to a fixed address, Blitzping randomized not only the source port but also the source IP within a CIDR range, a capability that is more computationally intensive and a feature that both hping3 and nping lacked in the first place. Lastly, hping3 and nping were both launched with the "best-case" command-line parameters so as to maximize their speed and disable runtime stdio logging. https://ift.tt/sC8kBnZ July 15, 2024 at 02:28PM
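The pre-generation optimization described above can be sketched in a few lines: build the static 40-byte IPv4 + TCP SYN header once, then patch only the randomized source fields inside the send loop. Blitzping itself is C; this Python sketch mirrors the idea only, with illustrative field values, and omits checksums and the actual raw-socket sendto():

```python
# Build a 40-byte IPv4 + TCP SYN packet template once; mutate only the
# randomized source IP/port per iteration of the (simulated) send loop.
import random
import struct

def build_template(dst_ip: str, dst_port: int) -> bytearray:
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40,          # version/IHL, TOS, total length
                     0, 0x4000,            # id, flags (DF) + fragment offset
                     64, 6, 0,             # TTL, protocol=TCP, checksum (left 0)
                     b"\x00" * 4,          # src IP, patched per packet
                     bytes(map(int, dst_ip.split("."))))
    tcp = struct.pack("!HHIIBBHHH",
                      0, dst_port,         # src port (patched), dst port
                      random.getrandbits(32), 0,  # seq, ack
                      0x50, 0x02,          # data offset (5 words), SYN flag
                      64240, 0, 0)         # window, checksum (left 0), urgent ptr
    return bytearray(ip + tcp)

pkt = build_template("203.0.113.9", 80)

# Tight loop: only 6 bytes change per packet
# (src IP at offsets 12..15, src port at 20..21).
for _ in range(3):
    pkt[12:16] = random.getrandbits(32).to_bytes(4, "big")       # random src IP
    pkt[20:22] = random.randrange(1024, 65536).to_bytes(2, "big")  # random src port
    # sendto(raw_socket, pkt, ...) would go here
```

Everything outside those 6 bytes is immutable across the run, which is what lets the hot loop avoid re-serializing headers on every packet.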
Show HN: Product Hunt for Music https://ift.tt/6HvlfXL
Show HN: Product Hunt for Music https://tracklist.it/ July 18, 2024 at 01:01AM
Tuesday, July 16, 2024
Show HN: My website that lets you talk to historical characters https://ift.tt/jmTIrM0
Show HN: My website that lets you talk to historical characters Hey guys, I am currently in high school and I wanted some feedback on the website that I built that lets you talk to historical characters and learn that way. https://ift.tt/ErhKwgD July 17, 2024 at 03:27AM