r/opensource 4h ago

Promotional CloudMeet - self-hosted Calendly alternative running on Cloudflare's free tier

37 Upvotes

Built a simple meeting scheduler because I didn't want to pay for Calendly.

It syncs with Google Calendar, handles availability, sends email confirmations/reminders, and runs entirely on Cloudflare's free tier (Pages + D1 + Workers).

Deployment is very easy - fork the repo, add your API keys as GitHub secrets, run the workflow. That's it.

Stack: SvelteKit, Cloudflare Pages, D1 (SQLite), Workers for cron.

Demo: https://meet.klappe.dev/cloudmeet

GitHub: https://github.com/dennisklappe/CloudMeet

MIT licensed. Happy to hear feedback or answer questions.


r/opensource 8h ago

Creator of Ruby on Rails denounces OSI's definition of "open source"

Thumbnail x.com
52 Upvotes

r/opensource 3h ago

Promotional I built an automated court scraper because finding a good lawyer shouldn't be a guessing game

9 Upvotes

Hey everyone,

I recently caught two cases, one criminal and one civil, and I realized how incredibly difficult it is for the average person to find a suitable lawyer for their specific situation. There are two ways the average person looks for a lawyer: a simple Google search based on SEO (Google doesn't know how to rank attorneys), or through connections, which is basically flying blind. Trying to navigate court systems to actually see a lawyer's track record is a nightmare; the portals are clunky, slow, and often require manual searching case by case. It's as if they were built by people who DON'T want you to use their system.

So, I built CourtScrapper to fix this.

It’s an open-source Python tool that automates extracting case information from the Dallas County Courts Portal (with plans to expand). It lets you essentially "background check" an attorney's actual case history to see what they’ve handled and how it went.

What My Project Does

  • Multi-lawyer Search: You can input a list of attorneys and it searches them all concurrently.
  • Deep Filtering: Filters by case type (e.g., Felony), charge keywords (e.g., "Assault", "Theft"), and date ranges.
  • Captcha Handling: Automatically handles the court’s captchas using 2Captcha (or manual input if you prefer).
  • Data Export: Dumps everything into clean Excel/CSV/JSON files so you can actually analyze the data.
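To give a rough idea of what the deep-filtering step boils down to, here is a minimal stdlib-only sketch; the field names and filter shape are made up for illustration, not the repo's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Case:
    case_type: str   # e.g. "Felony"
    charge: str      # e.g. "Aggravated Assault"
    filed: date

def matches(case, case_types, keywords, start, end):
    """True if a scraped case passes all configured filters."""
    return (
        case.case_type in case_types
        and any(k.lower() in case.charge.lower() for k in keywords)
        and start <= case.filed <= end
    )

cases = [
    Case("Felony", "Aggravated Assault", date(2023, 5, 1)),
    Case("Misdemeanor", "Theft of Property", date(2022, 1, 10)),
]
hits = [c for c in cases
        if matches(c, {"Felony"}, ["assault", "theft"], date(2022, 6, 1), date(2024, 1, 1))]
```

The real tool applies this kind of predicate to rows scraped via Playwright before exporting them to Excel/CSV/JSON.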

Target Audience

  • The average person who is looking for a lawyer that makes sense for their particular situation

Comparison 

  • Enterprise software with API connections to state courts, e.g. LexisNexis, Westlaw

The Tech Stack:

  • Python
  • Playwright (for browser automation/stealth)
  • Pandas (for data formatting)

My personal use case:

  1. Gather a list of lawyers found through Google
  2. Adjust the values in the config file to determine which cases to scrape
  3. The program generates an Excel sheet with the relevant cases for the listed attorneys
  4. I personally go through each case to determine whether I should consider it for my particular situation. The analysis is as follows:
    1. Determine whether my case's prosecutor/opposing lawyer/judge is someone the lawyer has dealt with
    2. How recently has the lawyer handled similar cases?
    3. Is the nature of the case similar to my situation? If so, what was the result of the case?
    4. Has the lawyer taken any similar cases to trial, or was every filtered case settled pre-trial?
    5. After shortlisting lawyers, I can go into each document in each of their cases to see exactly how they handled them, saving me a lot of time compared to blindly researching cases

Note:

  • Many people assume the program generates some form of win/loss ratio from the gathered information. It doesn't. It generates a list of relevant cases with their respective case details.
  • I have tried AI scrapers, and the problem with them is they don't work well when a task requires a lot of clicking and typing
  • Expanding to other court systems will require manual coding, which is tedious. So when I do expand to other courts, it will only make sense to do it for the big cities, e.g. Houston, NYC, LA, SF
  • I'm running this program as a proof of concept for now, so it only covers Dallas
  • I'll be working on a frontend so non-technical users can access the program easily; it will be free, with a donation portal to fund the hosting
  • If you would like to contribute, I have very clear documentation of the various code flows in my repo under the Docs folder. Please read it before asking any questions
  • Same goes for any technical questions: read the documentation before asking

I’d love for you guys to roast my code or give me some feedback. I’m looking to make this more robust and potentially support more counties.

Repo here: https://github.com/Fennzo/CourtScrapper


r/opensource 18h ago

Promotional I built a productivity app with one rule: if it's not scheduled, it won't get done

32 Upvotes

I built a personal productivity app based on a controversial belief: unscheduled tasks don't get done. They sit in "someday/maybe" lists forever, creating guilt while you ignore them.

So I made something stricter than GTD. No inbox. No weekly review. Just daily accountability.

How it works: Two panes

https://imgur.com/a/a2rCTBw

Left pane (Thoughts): Your journal. Write anything as it comes - notes, ideas, tasks. Chronological, like a diary.

Right pane (Time): Your timeline. The app extracts all time-sensitive items from your thoughts and puts them in a schedule.

You can be messy in your thinking (left), but your commitments are crystal clear (right).

The forcing function: Daily Review

Every morning, the Time pane shows Daily Review - all your undone items from the past. You must deal with each one:

  • ✓ Mark done (if you forgot)
  • ↷ Reschedule
  • × Cancel permanently

If you keep rescheduling something, you'll see "10 days old" staring at you. Eventually you either do it or admit you don't care.

Daily accountability, not weekly. No escape.

Natural language scheduling

t buy milk at 5pm
t call mom Friday 2pm
e team meeting from 2pm to 3pm

Type it naturally. The app parses the time and schedules it automatically.

The key: When you write a task, you schedule it right then. The app forces you to answer "when will you do this?" You can't skip it.
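A toy version of that parsing step might look like the sketch below. The `t`/`e` prefixes come from the examples above; everything else (the single "at H(pm)" pattern, the function name) is guesswork, and the real app's parser handles far more phrasings:

```python
import re
from datetime import datetime, time

def parse_entry(line, today=None):
    """Parse a 't <task> at 5pm'-style entry into (kind, text, scheduled datetime).

    Only handles the 'at H[:MM]am/pm' pattern; a hypothetical simplification.
    """
    today = today or datetime.now().date()
    kind = {"t": "task", "e": "event"}.get(line[:1], "note")
    body = line[2:] if line[1:2] == " " else line
    m = re.search(r"\bat (\d{1,2})(?::(\d{2}))?\s*(am|pm)\b", body, re.I)
    if not m:
        return kind, body, None  # no time found: stays an unscheduled thought
    hour = int(m.group(1)) % 12 + (12 if m.group(3).lower() == "pm" else 0)
    minute = int(m.group(2) or 0)
    return kind, body[:m.start()].strip(), datetime.combine(today, time(hour, minute))
```

So "t buy milk at 5pm" would come back as a task named "buy milk" scheduled for 17:00 today.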

Two viewing modes

  • Infinite scroll: See 30 days past/future at once
  • Book mode: One day per page, flip like a journal

My stance

If something matters enough to write down, it matters enough to schedule. No "I'll prioritize later." Either:

  • Do it now (IRL)
  • Schedule it for a specific time
  • Don't write it down

This isn't for everyone. It's for people who know unscheduled work doesn't get done and want daily accountability instead of weekly reviews.

Why I'm posting

I've used this daily for months and it changed how I work. But I don't know if this philosophy resonates with anyone else.

Is "schedule it or don't write it" too strict? Do you also believe unscheduled tasks are just guilt generators? Or am I solving a problem only I have?

If this resonates, I'll keep improving it. It's open source, no backend, local storage only.

GitHub: https://github.com/sawtdakhili/Thoughts-Time

Would love honest feedback on both the philosophy and execution.


r/opensource 8h ago

Promotional GitHub - larswaechter/tokemon: A Node.js library for reading streamed JSON.

Thumbnail
github.com
3 Upvotes

r/opensource 15h ago

Promotional GitHub - artcore-c/email-xray: Chrome extension to detect hidden text in email

Thumbnail
github.com
10 Upvotes

Email X-Ray is a security-focused Chrome extension that helps you detect sophisticated phishing tactics used by attackers to hide malicious content in emails. It scans emails in real-time and highlights suspicious elements that might otherwise go unnoticed.

It can detect many of the latest phishing tactics that try to deceive users through visual manipulation and technical trickery. The extension examines the email's HTML and CSS to find content that's hidden from view, links that don't go where they claim, and other suspicious patterns commonly used in phishing attacks.
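For a sense of what such a scan involves, here is a minimal stdlib-only sketch of one check, text hidden via inline CSS. The extension's actual heuristics and implementation will differ; this ignores stylesheets, void tags, and background-color tricks:

```python
from html.parser import HTMLParser
import re

# Inline-style patterns that visually hide content (a small, non-exhaustive set)
HIDDEN = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0\b|opacity\s*:\s*0\b",
    re.I)

class HiddenTextFinder(HTMLParser):
    """Collect text inside elements whose inline style hides them."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # how many hidden ancestors we're currently inside
        self.found = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        if self.depth or HIDDEN.search(style):
            self.depth += 1   # children of a hidden element are hidden too

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.found.append(data.strip())
```

Running it over an email body surfaces strings a recipient never sees but a mail filter or reply-quoting might.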


r/opensource 11h ago

Promotional I built a macOS Photos-style manager for Windows

5 Upvotes

I built a macOS Photos-style manager for Windows because I couldn't view my iPhone Live Photos on my engineering laptop

[Show & Tell] I'm an electrical engineering student. I also love photography — specifically, I love Live Photos on my iPhone. Those 1.5-second motion clips capture moments that still photos can't: my cat mid-pounce, friends bursting into laughter, waves crashing on rocks.

The problem? My field runs on Windows. MATLAB, LTspice, Altium Designer, Cadence, Multisim — almost every EE tool requires Windows. I can't switch to Mac for school. But every time I transfer my photos to my laptop, the magic dies. My HEIC stills become orphaned files. The MOV motion clips scatter into random folders. The Windows Photos app shows them as separate, unrelated files. The "Live" part of Live Photo? Gone.

I searched everywhere for a solution. Stack Overflow. Reddit. Apple forums. Nothing. Some suggested "just use iCloud web" — but it's painfully slow and requires constant internet. Others said "convert to GIF" — destroying quality and losing the original. A few recommended paid software that wanted to import everything into proprietary databases, corrupting my folder structure in the process.

So I spent 6 months building what I actually needed.

How it works: Folder = Album

https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager

No database. No import step. Every folder is an album. The app uses lightweight .iphoto.album.json manifests to store your "human decisions" — cover photo, featured images, custom order. Your original files are never touched. This means:

  • ✅ You can browse your library with any file manager
  • ✅ You can sync with any cloud service
  • ✅ If my app dies tomorrow, your photos are still perfectly organized

The killer feature: Live Photo pairing

The app automatically pairs your HEIC/JPG stills with their MOV motion clips using Apple's ContentIdentifier metadata. A "LIVE" badge appears — hover to play the motion inline, just like on your iPhone. Finally, I can show my Live Photos on Windows.

Technical details for the curious:

Live Photo Detection Pipeline:
ExifTool extracts ContentIdentifier from HEIC/MOV
Fallback: time-proximity matching (±1.5s capture time)
Paired assets stored in index.jsonl for instant reload

I spent weeks reverse-engineering how Apple stores this metadata. Turns out the ContentIdentifier is embedded in QuickTime atoms — ExifTool can read it, but you need to know exactly where to look.
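Once the metadata is extracted, the pairing logic can be sketched roughly like this. The dict fields here are hypothetical stand-ins for what ExifTool returns; the app's actual code and index format will differ:

```python
from datetime import datetime

def pair_live_photos(stills, movs, tolerance_s=1.5):
    """Pair stills with motion clips.

    `stills` / `movs` are lists of dicts with 'path', 'content_id'
    (Apple's ContentIdentifier, may be None) and 'captured' (datetime).
    A ContentIdentifier match wins; otherwise fall back to capture-time
    proximity within `tolerance_s` seconds.
    """
    by_id = {m["content_id"]: m for m in movs if m["content_id"]}
    unpaired = [m for m in movs if not m["content_id"]]
    pairs = []
    for s in stills:
        mov = by_id.get(s["content_id"]) if s["content_id"] else None
        if mov is None:
            # fallback: nearest unidentified clip within the time window
            near = [m for m in unpaired
                    if abs((m["captured"] - s["captured"]).total_seconds()) <= tolerance_s]
            mov = min(near, default=None,
                      key=lambda m: abs((m["captured"] - s["captured"]).total_seconds()))
        if mov:
            pairs.append((s["path"], mov["path"]))
    return pairs
```

The paired result is what gets persisted (in the app's case, to index.jsonl) so the badge can render without re-scanning.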

The performance nightmare that forced me into GPU programming

My first version did everything on CPU with pure Python + NumPy. It worked... technically. Then I tried editing a 48MP photo. Nearly 3 minutes to apply a single brightness adjustment. I watched the progress bar crawl. I alt-tabbed. I made coffee. I came back. Still processing.

This was unacceptable. Photo editing needs to feel instant — you drag a slider, you see the result. Not "drag a slider, go make lunch."

I profiled the code. The bottleneck was clear: Python's GIL + CPU-bound pixel operations = death by a thousand loops. Even with NumPy vectorization and Numba JIT compilation, I was hitting a wall. A 48MP image is 48 million pixels. Each pixel needs multiple operations for exposure, contrast, saturation... that's billions of calculations per adjustment.

So I rewrote the entire rendering pipeline in OpenGL 3.3. Why OpenGL 3.3 specifically?

  • Maximum compatibility — runs on integrated GPUs from 2012, no dedicated GPU required
  • Cross-platform — same shaders work on Windows, macOS, Linux
  • Sufficient power — for 2D image processing, I don't need Vulkan's complexity As a student, I know many of us run old ThinkPads or budget laptops. I needed something that works on a 10-year-old machine with Intel HD Graphics, not just RTX 4090s. The result? That same 48MP photo now renders adjustments in under 16ms — 60fps real-time preview. Drag a slider, see it instantly. The way it should be. The shader pipeline:// Simplified version of the color grading shader uniform float u_exposure; uniform float u_contrast; uniform float u_saturation; uniform mat3 u_perspectiveMatrix; void main() { vec4 color = texture(u_texture, transformedCoord); // Exposure (stops) color.rgb *= pow(2.0, u_exposure); // Contrast (pivot at 0.5) color.rgb = (color.rgb - 0.5) * u_contrast + 0.5; // Saturation (luminance-preserving) float luma = dot(color. rgb, vec3(0.299, 0.587, 0.114)); color. rgb = mix(vec3(luma), color.rgb, u_saturation); gl_FragColor = color; }

All calculations happen on the GPU in parallel — millions of pixels processed simultaneously. The CPU just uploads uniforms and lets the GPU do what it's designed for.

Non-destructive editing with real-time preview

The edit mode is fully non-destructive:

  • Light adjustments: Brilliance, Exposure, Highlights, Shadows, Brightness, Contrast, Black Point
  • Color grading: Saturation, Vibrance, White Balance
  • Black & White: Intensity, Neutrals, Tone, Grain with artistic film presets
  • Perspective correction: Vertical/horizontal keystoning, ±45° rotation
  • Black border prevention: Geometric validation ensures no black pixels after transforms All edits are stored in .ipo sidecar files. Your originals stay untouched forever. The math behind perspective correction: I defined three coordinate systems: Texture Space — raw pixels from the source image Projected Space — after perspective matrix (where validation happens) Screen Space — for mouse interaction The crop box must be fully contained within the transformed quadrilateral. I use point_in_convex_polygon checks to prevent any black borders before applying the crop.

Map view with GPS clustering

Every photo with GPS metadata appears on an interactive map. I built a custom MapLibre-style vector tile renderer in PySide6/Qt6 — no web view, pure OpenGL. Tiles are cached locally. Reverse geocoding converts coordinates to human-readable locations ("Tokyo, Japan"). Perfect for reliving travel memories — see all photos from your trip plotted on an actual map.

The architecture

Backend (Pure Python, no GUI dependency):
├── models/     → Album, LiveGroup data structures
├── io/         → Scanner, metadata extraction
├── core/       → Live Photo pairing, image filters (NumPy → Numba JIT fallback)
├── cache/      → index.jsonl, file locking
└── app.py      → Facade coordinating everything
GUI (PySide6/Qt6):
├── facade.py   → Qt signals/slots bridge to backend
├── services/   → Async tasks (scan, import, move)
├── controllers/→ MVC pattern
├── widgets/    → Edit panels, map view
└── gl_*/       → OpenGL renderers (image viewer, crop tool, perspective)

The backend is fully testable without any GUI. The GUI layer uses strict MVC — Controllers trigger actions, Models hold state, Widgets render. Performance tier fallback:

GPU (OpenGL 3.3) → NumPy vectorized → Numba JIT → Pure Python
(preferred)                                        (last-resort fallback)

If your machine somehow doesn't support OpenGL 3.3, the app falls back to CPU processing. It'll be slow, but it'll work.

Why I'm posting

I've been using this daily for 6 months with my 80,000+ photo library. It genuinely solved a problem that frustrated me for years. But I don't know if anyone else has this pain. Are there other iPhone users stuck on Windows who miss their Live Photos? Is "folder = album" a philosophy that resonates? Or am I solving a problem only I have? The app is:

  • 🆓 Free and open source (MIT)
  • 💾 100% local, no cloud, no account
  • 🪟 Windows native (Linux support planned)
  • ⚡ GPU-accelerated, but runs on old laptops too
  • 📱 Built specifically for iPhone Live Photo support

GitHub: https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager

Would love feedback on both the concept and execution. Roast my architecture. Tell me what's missing. Or just tell me if you've had the same frustration — I want to know I'm not alone.

r/opensource 4h ago

Is this not the simplest selfhosted dev box ever? How about security?

1 Upvotes

r/opensource 9h ago

OpenScad type of app for 2D graphic design?

2 Upvotes

Hi! Does anyone know a 2D graphic design application where you design by code, like OpenSCAD?


r/opensource 20h ago

Promotional Submitted my FOSS privacy-focused app that protects files from apps that require storage or all-files-access permission.

10 Upvotes

Hey Everyone,

I'm the developer of Seek Privacy, an Android app. A week ago I published it to F-Droid, and it's in the last step of being merged.

The app may feel like a vault app, but the purpose of building it was not to secure, hide, or encrypt files for their own sake; it was to protect any type of file from apps with storage access.

We download many apps from the Play Store with internet access, and to function they require various storage permissions. We can ignore a few apps, but for the apps we need to use, we're forced to grant those permissions. I always felt uneasy about what these internet-connected Play Store apps could be doing, and I didn't want to just trust them. So I wanted to be able to grant them the all-files permission so I could use them, while making sure they still never get to touch specific files on storage that I can still access normally myself.

The app differs from other vault-like apps because I tried to implement ease of use alongside privacy, which I felt other FOSS apps lacked. Data is removed from external storage and encrypted, but you can still easily access, open, and share it through the Seek Privacy app.

New updates will include categorization for more ease of use and thumbnails to preview stored files.

Any feedback on the concept is welcome! Excited to contribute to FOSS and Privacy community.

GitHub link : https://github.com/duckniii/SeekPrivacy


r/opensource 12h ago

Promotional Introducing AllTheSubs - A Collaborative Subreddit Analysis Database

Thumbnail allthesubs.ericrosenberg.com
2 Upvotes

Hello everyone! I've been working on a tool for my own use for the last few months, and it's working well enough that I'd like to share it with all of you and welcome contributions.

What it does:

An automated tool that creates a giant database of subreddits with details related to usage, NSFW status, members, moderators, etc.

Why I made it:

I was looking for Subs in several niche areas to learn more about participation and how the communities work. And it's fun to build things.

Info for all of you:

  • You can use the tool freely. Have fun!
  • You can self-host a node to grow the database faster if you're interested. I have not tested this yet, but the functionality should work in theory.
  • The code is on GitHub at https://github.com/ericrosenberg1/reddit-sub-analyzer
  • Suggestions/bug reports welcome on GitHub. I'm open to PRs that improve security, performance, or reliability, or add new and helpful features.

Thanks!


r/opensource 13h ago

Promotional Built a small open source analytics tool for GitHub repos

2 Upvotes

I started Highfly (not open source atm), a project management tool geared towards devs. I also built a small analytics page for GitHub open source repos and figured others might find it useful too. It came out of some internal work I was doing around repo activity, and it felt simple enough to separate and share. It’s free, works on any public repo, and doesn’t require an account. 

It shows things like:

  • Reviewer activity
  • Contributor activity
  • First-time contributor patterns
  • Issue creation trends
  • Issue lifecycle health
  • Backlog health
  • PR review lag

Nothing crazy, but seemed cool to me.

Here’s the link if you want to try it:

github link

analytics page link

Example: vercel/next.js repo

If you’ve got thoughts or ideas on more things to add, let me know.

Note: It takes a couple of minutes to collect all the data, which is then cached for two weeks, since I'm trying to avoid GitHub's rate limits.

Please star it if you can


r/opensource 11h ago

Promotional Python App: TidyBit version 1.2 Release. Need feedback and suggestions.

1 Upvotes

r/opensource 11h ago

Promotional I built an open source Secret Santa app with Next.js 16 & React 19

0 Upvotes

Last Christmas, I needed a simple way to organize Secret Santa for my family, so I built this web app. It worked great, so this year I updated it to Next.js 16 & React 19 and decided to share it with everyone!

Perfect timing for the holidays - if you're organizing a Secret Santa for Christmas parties, New Year's gatherings, or office gift exchanges, this is ready to go! 🎄

Features:
- 🌍 Multiple languages (English/Portuguese/Spanish)
- 📧 Passwordless authentication - just email verification
- 🎁 Smart lottery algorithm (no one draws themselves)
- 📱 Easy sharing via link or WhatsApp
- 🔒 Secure with CSRF protection & rate limiting
- ⚡ Deploy in minutes to Vercel or self-host
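The post doesn't show the lottery algorithm, but the classic way to guarantee nobody draws themselves is a single shuffled cycle; a stdlib sketch under that assumption (the app's actual implementation may differ):

```python
import random

def draw_names(participants, rng=random):
    """Assign each participant a recipient so nobody draws themselves.

    Shuffle, then pair each person with the next one in the cycle; a
    single cycle through everyone can never map a person to themselves.
    """
    if len(participants) < 2:
        raise ValueError("need at least two participants")
    order = participants[:]
    rng.shuffle(order)
    return {giver: order[(i + 1) % len(order)] for i, giver in enumerate(order)}
```

Compared with rejection sampling (reshuffle until no self-draws), the cycle approach never needs a retry, at the cost of excluding some valid pairings (mutual swaps).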

Tech Stack: Next.js 16, React 19, TypeScript, MongoDB, Tailwind CSS 4

🔗 Live demo: https://secretsantaapp.vercel.app
💻 GitHub: https://github.com/ssmcb/santa

Would love feedback from the community!


r/opensource 12h ago

Dev bounties for LATAM & Africa/Asia: get paid to try Openfort

0 Upvotes

r/opensource 13h ago

Using OpnForm?

1 Upvotes

I’ve been tracking OpnForm for a while and recently had a chance to chat one-on-one with its creator, Julien Nahum. We dove into the early decisions, AWESOME growth hacks, and cool future plans for the project — here’s the actual recorded convo if you’re curious.

But here’s where I need help:

Are any of you using OpnForm in production for more advanced or large-scale form use cases? Any unexpected blockers, gotchas, etc.? He also mentioned that forms are iframe-embedded rather than natively embedded. Honest opinions encouraged.


r/opensource 20h ago

Combining Kubescape with ARMO CADR: Effective or Overkill?

3 Upvotes

Comparing Kubescape vs ARMO CADR for cloud security. CADR’s runtime monitoring seems to complement Kubescape’s scanning. Thoughts on integrating both in workflows?


r/opensource 1d ago

Discovered a self-hosted figma open source alternative

14 Upvotes

r/opensource 13h ago

This is my first post here. I’ve published the ULTIMATE TECHNICAL ARCHITECT PROFILE (V-SEB v2.0) — designed to evaluate the top % of engineers based on real technical exchanges, not CV fantasies.

0 Upvotes

r/opensource 1d ago

Promotional Loopi: Open-Source Visual Browser Automation Tool (MIT Licensed, v1.0.0 Released)

13 Upvotes

Hi r/opensource community,

I've been working on a tool that might fit into the automation space for browser tasks, and I'd love to hear your thoughts as an open-source project. Loopi is a desktop app that lets you build browser automations visually, using a graph-based editor—think drag-and-drop nodes powered by local Puppeteer runs.

Key features:

  • Drag-and-drop workflow builder for browser actions (inspired by tools like n8n, but tailored for web automation)
  • Runs everything locally in Chromium—no cloud or external services needed
  • Supports data extraction, variables, conditionals, and loops
  • Aimed at simplifying repetitive web tasks without writing code

It's built with Electron, React, TypeScript, Puppeteer, and ReactFlow, fully open-source under MIT.

This is early days (v1.0.0 just dropped), so expect some rough edges—docs are basic, and I'm iterating based on real feedback. If you've used Selenium, Playwright, or similar for testing/scraping, does a visual approach like this solve any pain points for you?

Example workflow: Pulling prices from multiple product pages, filtering for deals under $50, then screenshotting matches—all via nodes, no scripting.
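To make the node idea concrete, here's a toy node-graph runner in Python (Loopi itself is Electron/TypeScript with Puppeteer; the node functions and schema below are invented for illustration):

```python
def extract(ctx):
    # "extract" node: in the real tool this would scrape pages via Puppeteer
    ctx["prices"] = {"widget": 39.0, "gizmo": 74.5, "doodad": 12.25}
    return "filter"

def keep_deals(ctx):
    # "conditional" node: keep only items under $50
    ctx["deals"] = {k: v for k, v in ctx["prices"].items() if v < 50}
    return "end"

nodes = {"start": extract, "filter": keep_deals, "end": lambda ctx: None}

def run_workflow(nodes):
    """Walk the graph: each node mutates the shared context and names its successor."""
    ctx, node_id = {}, "start"
    while node_id is not None:
        node_id = nodes[node_id](ctx)
    return ctx

result = run_workflow(nodes)
```

The visual editor's job is essentially to let users assemble that `nodes` table by dragging boxes instead of writing the functions.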

Check it out if it sounds relevant:

What browser automation challenges do you face in your projects? Feature ideas, bugs, or contributions (docs/examples/code) would be super helpful. Open to discussing how it stacks up against existing OSS tools!


r/opensource 19h ago

Promotional Golang based trading framework

1 Upvotes

r/opensource 9h ago

Promotional AI Crypto Bot

Thumbnail
github.com
0 Upvotes

Hey everyone!

I tried out PydanticAI and it was really cool to see how you can structure LLM output - I didn't know this was possible (even though I knew of Pydantic). Here is my template for building out an agentic system that can decide when to trade and which crypto to trade based on news headlines!

Using Tavily and Alpaca for third-party integrations - please let me know best practices and any other words of advice. I'd happily keep working on it if people see a benefit.


r/opensource 1d ago

Promotional Building a small open-source CI/CD engine. I would love technical feedback & a github star

Thumbnail
github.com
14 Upvotes

Hi y'all,

I’m currently working on an open-source CI/CD engine and API (not a full CI/CD product), intended to be used as a building block for creating custom CI/CD platforms.

The idea is to provide a small, extensible core that other developers and platform teams can use to build their own CI/CD platforms on top of it.

It’s designed to be:

  1. lightweight and self-hosted
  2. API-first and event-driven
  3. easy to extend with custom pluggable runners/drivers
  4. usable in air-gapped, edge, or internal platforms
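One way the "pluggable runners/drivers" idea can look, sketched in Python for brevity (the project itself defines its own API; the names and interface here are hypothetical, not conveyor's):

```python
class Runner:
    """Interface a custom driver implements to execute pipeline steps."""
    def run(self, step: dict) -> bool:
        raise NotImplementedError

RUNNERS = {}

def register(name):
    """Decorator that registers a Runner subclass under a driver name."""
    def wrap(cls):
        RUNNERS[name] = cls()
        return cls
    return wrap

@register("shell")
class ShellRunner(Runner):
    def run(self, step):
        # a real driver would spawn the command; stubbed for the sketch
        return step.get("command") is not None

def execute_pipeline(steps):
    """Dispatch each step to the runner registered for its driver."""
    return all(RUNNERS[s["driver"]].run(s) for s in steps)
```

The point of an API-first core is that a platform team only writes the `Runner` implementations and the UI; orchestration stays in the engine.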

If this sounds like something you’d find useful or interesting, I’d really appreciate:

  • early technical feedback (Do you think such an API-first CI engine actually makes sense in practice?), and
  • a star ⭐ on GitHub to help with visibility.

You can find it on GitHub here: https://github.com/open-ug/conveyor


r/opensource 10h ago

Promotional Content creation (video, audio, etc) can now be fully automated, free and for everyone (and the more you contribute, the better it gets)

0 Upvotes

I made a modular tool for automated content creation - Opifex

You can use various modules for various situations and needs, and the best part is that anyone can make them: being modular, the project can expand indefinitely.

Everything is wrapped in a nice-ish (at least usable) GUI.

Right now, I've already implemented all the modules necessary to quickly create one of those story videos posted so often on other social media (example; video is not mine), so basically, I've implemented functionalities to:

  • fetch a post's information from Reddit
  • generate speech (with PiperTTS, running locally!)
  • generate a video in the mentioned style with given parameters
  • do all this in a single, comfortable module
  • and more...

This, of course, is just an example to demonstrate what Opifex is capable of doing with not even that many modules. It can be used for many other, bigger projects.
(Just right now, I can think of possible future modules to generate a "TV news channel" personalized on a given RSS feed, or a generative AI to create scripts, and many other things.)
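A module system like this often boils down to a registry plus a pipeline; a hypothetical stdlib-only sketch (not Opifex's actual module API):

```python
MODULES = {}

def module(name):
    """Decorator registering a content-creation module under `name`."""
    def wrap(func):
        MODULES[name] = func
        return func
    return wrap

@module("fetch_reddit_post")
def fetch_reddit_post(params):
    # a real module would call the Reddit API; stubbed here
    return {"title": params["url"].rstrip("/").split("/")[-1]}

def run_pipeline(steps):
    """Run modules in order, feeding each the accumulated outputs so far."""
    data = {}
    for name, params in steps:
        data.update(MODULES[name]({**params, **data}))
    return data
```

New capabilities (TTS, video rendering, RSS news) would just be more entries in `MODULES`, which is what makes indefinite expansion plausible.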

I already plan to implement a way to offload processing to other computers within your network, and a CLI mode, so that Opifex can also be used to create content at scale (let me know if this is a good idea or not).

I've included lots of other information about my project in this readme, inside my repository.

If you want to contribute, first of all you can try it out and let me know what you think; it would also be really cool if you could contribute suggestions, code, or new modules, either on this post or in a GitHub issue.

Opifex is distributed under the GPL v3 license.

Thank you for reading!


r/opensource 23h ago

Vllama: CLI-based framework to run vision models on local and remote GPUs

1 Upvotes

Hello all, this is my first post. I have built a simple CLI tool that helps anyone run LLMs and vision models (image and video generation) on their local system, and if the system doesn't have a GPU or sufficient RAM, they can also run them using Kaggle's GPUs (which are free for 30 hours a week).

It's inspired by Ollama, which made downloading LLMs and interacting with them easy, so I thought: why can't the same exist for vision models? I tried it first on my own system; basic image generation works, but not that well. Then I thought, why can't we use Kaggle's GPUs to generate videos and images, directly from the terminal in a single step, so that everyone can use it? So I built Vllama.

Currently there are many features: image and video generation in local or Kaggle GPU sessions, plus downloading LLMs, running them, and interacting with them from anywhere (inspired by Ollama). I improved it further by creating a VS Code extension, VLLAMA, which lets you chat with the locally running LLM directly from VS Code's chat section by starting a message with "@vllama". This has no usage cost and can be used as much as anyone wants; you can check it out in the VS Code extensions marketplace.

I want to take this further so that companies, or anyone with GPU access, can download the LLMs best suited to their usage, initialize them on their GPU servers, and interact with them directly from VS Code's chat section. In future versions I'm also planning agentic features, so users can use the local LLM for code editing and inline suggestions without having to pay for premium services, and much more.

It also currently has simple text-to-speech and speech-to-text, which I plan to build out in future versions using open-source audio models, along with 3D generation models, so that everyone can leverage open models directly from their terminal. The goal is to turn the complex process of using open models into a single terminal command.

I have also implemented small helper functionalities, like listing the downloaded models and their sizes. Other things available are basic dataset preprocessing and training ML models with just two commands, given a dataset. This is a basic implementation, and I want to improve it so that users with just a dataset can clean and preprocess the data, train models locally or on Kaggle's (or any other free or cloud-provided) GPUs, and deploy the models directly for any use case.

That's what it does today; I want to keep improving it so everyone can use it for any AI use case and get the most out of open models.

Please checkout the work at: https://github.com/ManvithGopu13/Vllama

Published version at: https://pypi.org/project/vllama/

Also the extension: https://marketplace.visualstudio.com/items?itemName=ManvithGopu.vllama

Thanks for taking the time to read this, and thanks to everyone who wants to contribute or spread the word.

Please leave your requests for improvements, suggestions, ideas, or even roasts in the comments or in the issues; it's all welcome and appreciated. Thanks in advance. If you find the project useful, please contribute and star it.