Reddit Programming – Telegram
211 subscribers
1.22K photos
125K links
I will send you the newest posts from the subreddit /r/programming
Tilesets and Makefiles
https://www.reddit.com/r/programming/comments/1pa1olk/tilesets_and_makefiles/

I decided to automate my whole tilemap generation pipeline using only ImageMagick and a Makefile. Hopefully someone can use this in their projects :) submitted by /u/countkillalot (https://www.reddit.com/user/countkillalot)
[link] (https://yasendinkov.com/posts/tilesets/) [comments] (https://www.reddit.com/r/programming/comments/1pa1olk/tilesets_and_makefiles/)
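The details are in the linked write-up; as a rough illustration of the idea (not the author's actual Makefile), here is a minimal Python sketch that shells out to ImageMagick's `montage` tool to pack individual tile images into one tileset sheet. The directory layout, column count, and file names are assumptions.

```python
import subprocess
from pathlib import Path

def montage_args(tiles, out_png, columns=8):
    """Build the ImageMagick `montage` command line: pack the given tile
    images into a grid `columns` wide with no padding between tiles."""
    return ["montage", *tiles,
            "-tile", f"{columns}x",   # fixed number of columns, rows as needed
            "-geometry", "+0+0",      # no spacing: tiles sit edge to edge
            "-background", "none",    # keep transparency
            out_png]

def build_tileset(tile_dir, out_png, columns=8):
    """Collect all PNG tiles in a directory and stitch them into one sheet."""
    tiles = sorted(str(p) for p in Path(tile_dir).glob("*.png"))
    subprocess.run(montage_args(tiles, out_png, columns), check=True)
```

In a Makefile this would become a single rule whose prerequisites are the tile images, so the sheet is rebuilt only when a tile changes.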
Documentation Exporter (For pasting into LLMs)
https://www.reddit.com/r/programming/comments/1pa1rec/documentation_exporter_for_pasting_into_llms/

Hey, I built a tool that automatically exports every documentation page from any Mintlify site into Markdown. No more manually copying pages one by one. You can grab the full docs for things like the Anthropic API or TensorZero and drop them straight into an LLM. It’s live at docs-exporter (https://otso.veistera.com/docs-exporter/), and the source code is on GitHub (https://github.com/OtsoBear/docs-exporter.git). I built it to dump docs into models for myself, and I’m posting it here in case it’s useful for someone else! submitted by /u/OtsoBear (https://www.reddit.com/user/OtsoBear)
[link] (https://otso.veistera.com/docs-exporter/) [comments] (https://www.reddit.com/r/programming/comments/1pa1rec/documentation_exporter_for_pasting_into_llms/)
Technical Design Documents - Part 1 - Case-Study 1
https://www.reddit.com/r/programming/comments/1papdni/technical_design_documents_part_1_casestudy_1/

In this video I talk about Technical Design Documents, a kind of artifact that is key to the success of any software development project. To give the lesson context, I use the technical design document I created for my Cloud-Based Multi-Service Platform for Smart Event Management case-study project. submitted by /u/ZoePsomi (https://www.reddit.com/user/ZoePsomi)
[link] (https://www.youtube.com/watch?v=bW7mVe3dn2o&list=PLgcaIrgxzJn_IScY-VDAT9pqdS82HKVjV&index=2) [comments] (https://www.reddit.com/r/programming/comments/1papdni/technical_design_documents_part_1_casestudy_1/)
httpp - a tiny, fast, header-only HTTP/1.1 parser library in C
https://www.reddit.com/r/programming/comments/1pasabh/httpp_tiny_fast_header_only_http_11_parser/

A pretty fast, easy-to-extend, zero-allocation parsing library. If you want to learn about pointer arithmetic, this one will definitely help! submitted by /u/Born_Produce9805 (https://www.reddit.com/user/Born_Produce9805)
[link] (https://github.com/cebem1nt/httpp) [comments] (https://www.reddit.com/r/programming/comments/1pasabh/httpp_tiny_fast_header_only_http_11_parser/)
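The library itself is C, but the zero-allocation idea — return views into the original buffer instead of copying substrings out of it — can be sketched in a few lines of Python with `memoryview`. This illustrates the technique only, not the httpp API:

```python
def parse_request_line(buf: bytes):
    """Split an HTTP/1.1 request line into (method, target, version)
    without copying: each result is a memoryview slice into `buf`."""
    end = buf.index(b"\r\n")        # request line ends at the first CRLF
    sp1 = buf.index(b" ")           # space after the method
    sp2 = buf.index(b" ", sp1 + 1)  # space after the request target
    view = memoryview(buf)
    return view[:sp1], view[sp1 + 1:sp2], view[sp2 + 1:end]
```

In C the same trick is done with pointer arithmetic: the parser hands back `(char *ptr, size_t len)` pairs pointing into the receive buffer, which is why no allocation is needed.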
Developers Have Nothing To Fear From Generative AI
https://www.reddit.com/r/programming/comments/1patcfs/developers_have_nothing_to_fear_from_generative_ai/

Generative AI is prolific. However, the hype around it taking every job is sorely misplaced.
I discuss which areas will see the greatest impact from the use of generative AI and many of the ways it will affect our lives. submitted by /u/friendly-devops (https://www.reddit.com/user/friendly-devops)
[link] (http://youtube.com/watch?v=DRGA5uPxzEU&feature=youtu.be) [comments] (https://www.reddit.com/r/programming/comments/1patcfs/developers_have_nothing_to_fear_from_generative_ai/)
Freecode editor
https://www.reddit.com/r/programming/comments/1pb094q/freecode_editor/

I tried this online code editor today, and honestly it wasn’t bad at all. It’s simple, fast, and gets the job done if you just want to test some Python/Java/JS quickly. I’m curious: has anyone else tried FreeCodeEditor? What do you think about it? Do you see potential or features it should add? submitted by /u/Shot-Chair-5635 (https://www.reddit.com/user/Shot-Chair-5635)
[link] (https://www.freecodeditor.com/) [comments] (https://www.reddit.com/r/programming/comments/1pb094q/freecode_editor/)
r/Spigen Data Scraping
https://www.reddit.com/r/programming/comments/1pb23cc/rspigen_data_scaping/

I am looking for someone to create an Excel sheet of the last 500 or so posts made in r/Spigen (https://www.reddit.com/r/Spigen). This is for a research topic, but I am unfamiliar with data scraping. I am looking for identifiers such as id, title, author, score, num_comments, created_utc, url, and selftext. Copilot has also provided this code, but I do not know how to run it. Please help!

    import praw
    import pandas as pd

    # Authenticate with the Reddit API
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # replace with your client_id
        client_secret="YOUR_CLIENT_SECRET",  # replace with your client_secret
        user_agent="spigen_scraper",         # short description of your app
    )

    # Choose the subreddit
    subreddit = reddit.subreddit("Spigen")

    # Collect the last 500 posts
    posts = []
    for submission in subreddit.new(limit=500):
        posts.append({
            "id": submission.id,
            "title": submission.title,
            "author": str(submission.author),
            "score": submission.score,
            "num_comments": submission.num_comments,
            "created_utc": submission.created_utc,
            "url": submission.url,
            "selftext": submission.selftext,
        })

    # Convert to a DataFrame
    df = pd.DataFrame(posts)

    # Save to Excel (requires openpyxl)
    df.to_excel("spigen_reddit_posts.xlsx", index=False)

To run it: install the dependencies with pip install praw pandas openpyxl, create a "script" app at https://www.reddit.com/prefs/apps to get a client_id and client_secret, paste them in, save the code as a .py file, and run it with python. submitted by /u/seanmi24 (https://www.reddit.com/user/seanmi24)
[link] (https://www.reddit.com/r/Spigen/) [comments] (https://www.reddit.com/r/programming/comments/1pb23cc/rspigen_data_scaping/)
Python solution to extract all tables from PDFs and save each table to its own Excel sheet
https://www.reddit.com/r/programming/comments/1pb3mwf/python_solution_to_extract_all_tables_pdfs_and/

Hi everyone, I’m working with multiple PDF files (all in English, mostly digital). Each PDF contains multiple tables. Some have 5 tables, others have 10–20 tables scattered across different pages. I need a reliable way in Python (or any tool) to automatically: open every PDF, detect and extract ALL tables correctly (including tables that span multiple pages), and save each table into Excel, preferably one table per sheet (or one table per file). Does anyone know the best working solution for this kind of bulk table extraction? I’m looking for something that “just works” with high accuracy. Any working code examples, GitHub repos, or recommendations would save my life right now! Thank you so much! 🙏 submitted by /u/CalendarOk67 (https://www.reddit.com/user/CalendarOk67)
[link] (https://pypi.org/project/pytesseract/) [comments] (https://www.reddit.com/r/programming/comments/1pb3mwf/python_solution_to_extract_all_tables_pdfs_and/)
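The link above points at pytesseract, which is OCR; for digital (text-based) PDFs a table-extraction library is usually a better fit. A minimal sketch with `pdfplumber` and pandas, writing one Excel sheet per table — the library choice, header handling, and sheet naming are all assumptions, and tables that span multiple pages would still need to be stitched together afterward:

```python
import pandas as pd

def table_to_df(table):
    """Convert a table (a list of rows, as returned by pdfplumber's
    extract_tables) into a DataFrame, using the first row as the header."""
    header, *rows = table
    return pd.DataFrame(rows, columns=header)

def export_pdf_tables(pdf_path, xlsx_path):
    """Extract every table in the PDF and write each to its own sheet.
    Returns the number of tables found."""
    import pdfplumber  # pip install pdfplumber (digital PDFs only, no OCR)
    count = 0
    with pdfplumber.open(pdf_path) as pdf, pd.ExcelWriter(xlsx_path) as writer:
        for page_no, page in enumerate(pdf.pages, start=1):
            for table in page.extract_tables():
                count += 1
                table_to_df(table).to_excel(
                    writer, sheet_name=f"p{page_no}_table{count}", index=False)
    return count
```

For scanned PDFs this approach finds nothing, since there is no embedded text; that is where an OCR step (e.g. pytesseract, as linked) would have to come first.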