There should be, that’s just how fiber works. If they lay a 10 Gb line in the street, they’ll probably sell a 1 Gb connection to 100 households. (Margins vary per provider and location.)
If they give you an uncapped connection to the entire wire, you’ll DoS the rest of the neighborhood
That’s why people are complaining “I bought 1Gb internet, but I’m only getting 100Mb!” - the provider oversold bandwidth in a busy area. 1Gb would probably be the max speed if everyone else were idle. If they gave everyone uncapped connections the problem would get even worse.
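The back-of-the-envelope version of that, with the (hypothetical) numbers from above:

```python
# Back-of-the-envelope oversubscription math with the numbers above
trunk_gbps = 10     # the shared fiber line in the street
households = 100    # each sold a 1 Gb/s plan
plan_gbps = 1

oversubscription = households * plan_gbps / trunk_gbps
worst_case_mbps = trunk_gbps / households * 1000  # everyone maxing out at once

print(oversubscription)   # 10.0 -> the line is oversold 10x
print(worst_case_mbps)    # 100.0 -> exactly the "only getting 100Mb!" complaint
```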
I believe there are a large number of feature requests on Lemmy’s GitHub page, making it difficult for developers to prioritize what’s truly important to users.
GitHub issues are annoying that way. You could solve it by closing down “issues” and using Discussions instead. People can up- and downvote discussions, and you can see that from the list view, unlike with issues.
And you can have threaded conversations in discussions.
I assume they’re talking about this API
Any tools that interface well with it?
Lots of tools, but it depends on where you want to use it. For example, inside Obsidian you can use it as a text generator
Inside VSCode you can use something like AI Genie
If you just want to use it raw, you can use Postman
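And “raw” really is just one HTTP request. A minimal sketch in Python, assuming it’s the OpenAI chat completions endpoint being discussed - this is essentially what Postman would send for you:

```python
# Minimal raw API call, assuming the OpenAI chat completions endpoint
# is the API in question. Requires OPENAI_API_KEY in your environment.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4",
          "messages": [{"role": "user", "content": "Hello!"}]},
)
print(resp.json()["choices"][0]["message"]["content"])
```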
I think a couple of those points come down to the tech lead writing a “Definition of Done”
1 - This is useful for junior members or new members to know what to expect. For example, a “Definition of Done” would include that a new function should be documented, it should be refactored into clean code per the code standards, it should be tested by QA, there should be unit tests covering the function, and the function should be released to production before it’s marked as done - etc
2 - When managers that don’t know anything about coding ask for an estimation - either from you, or from someone in your team - the “Definition of Done” can be taken as a reference point. If a manager asks how long something will take, you don’t just consider “Oh, I guess I can build this in a couple of days”. Yea ok sure, you can build it to meet the manager’s minimal requirements for the function to kinda work, but it’s messy code and untested - so if you keep in mind that there are loads of other meta-things to do besides just building code, you can pretty much double your initial estimation
Otherwise you just accumulate more and more technical debt, and at some point your “just build it” estimation gets inflated, because for every change you have to touch lots of 1000-line files, figure out what the changes broke, fix that, see what fails next, etc etc
And it would have been better in the long run if you had spent more time on it while you were working on the function
Do you mean their code is already set up with some kind of output to terminal that you can use to add a unit test into as well?
I don’t even recall what I was messing with a while back, I think it was Python, but adding a simple print test didn’t work. I have no idea how they were redirecting print(), but that was a wall I didn’t get past at the time.
Yea, probably not every language has a concept of unit tests, but it’s basically test code.
Like if you have a calculator, there would be a test (outside of the real project) of something like
If Calculator.Calculate(2 + 2) then assert outcome = 4
That way - if, let’s say, the calculator only does + operations - you could still copy that test line and create a new test of
If Calculator.Calculate(5 * 5) then assert outcome = 25
Your test will fail initially, but you can just run through it in a debugger, step into the code, figure out where it’s most appropriate to add a * operator function, implement it, and see your test succeed.
Another benefit of that is that if you submit your change as a PR, the repo maintainer doesn’t have to determine whether your code actually works just by looks, or by actually running the calculator - your test proves you’ve added something useful that works (and that you didn’t break the existing tests)
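In Python with pytest, the whole idea fits in a few lines - Calculator here is a hypothetical stand-in for whatever project you’re contributing to:

```python
# Minimal pytest sketch of the idea above. "Calculator" is hypothetical,
# standing in for the real project's code.
class Calculator:
    def calculate(self, a, op, b):
        if op == "+":
            return a + b
        if op == "*":          # the new operator you'd add for your PR
            return a * b
        raise ValueError(f"unsupported operator: {op}")

def test_addition():           # the existing test you copy from
    assert Calculator().calculate(2, "+", 2) == 4

def test_multiplication():     # your new test: fails until "*" is implemented
    assert Calculator().calculate(5, "*", 5) == 25
```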
That stuff seems like a language on top of a language to me, and when it errors I get really lost.
If you’re just programming for yourself without the intent to submit it as a PR, you can just throw away the linter file. But I mentioned it was good to have in a project because, if there are multiple people working on it, all with their own style, the code can become a mess quite fast
I get sick of something that annoys me and want to go in and fix the issue despite being completely unqualified, but naive enough to try.
Well, I mean, that’s basically how everything works, right? You start completely unqualified, mess around for a while, and then you’re more qualified next time…
With stuff like Marlin, I seem to like the hardware side of things.
Just messing around with stuff you like is a good way to learn - though in my experience doing anything with hardware is way more difficult than plain software. If you have to interface with hardware it’s very often pretty obscure stuff, like sending the correct hardware instructions to a driver, or even just to “hardware pins”… Trying to modify a driver as a kind of starter project doesn’t sound like something I’d recommend
Generally mostly by cyclomatic complexity:
- How big are the methods overall
- Do methods more or less have a single responsibility
- How is the structure - is everything interconnected and calling each other, or are there some levels of orchestration?
- Do they have any basic unit tests, so that if I want to add anything, I can copy-paste some test with an entrypoint close to my modification to see how things are going
- Bonus: they actually have a linter configuration in their project, and consistent, commonly used style guidelines
If the code structure itself is good but the formatting is bad, I can generally just run the code through a linter that fixes all the formatting. That makes it easier to use, but it’s probably not something I’d actually contribute PRs to
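For Python projects you can get a rough number for both of those points in a couple of lines - a sketch using radon (complexity scoring) and black (formatting); the file name is hypothetical:

```python
# Quick "is this code changeable?" check: radon scores cyclomatic
# complexity, black shows what the formatting should look like.
# "their_module.py" is a hypothetical file from the project you're eyeing.
import black
from radon.complexity import cc_visit

with open("their_module.py") as f:
    source = f.read()

# Complexity per function/method; roughly, anything above ~10 gets painful
for block in cc_visit(source):
    print(f"{block.name}: complexity {block.complexity}")

# If only the formatting is bad, a formatter fixes that in one pass
print(black.format_str(source, mode=black.Mode()))
```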
How do you learn to spot these situations before diving down the rabbit hole? Or, to put it another way, what advice would you give yourself at this stage of the learning curve?
Probably some kind of metric of “If I open this code in an IDE and add my modification, how long will it take before I can find a suitable entrypoint, and how long before I can test my changes?” - if it’s half a day of debugging and diagnostics before I can even get started trying to change anything, it seems a bit tedious
Edit: Though also - how much time is this going to save you if you do implement it? If having this feature saves you weeks of work, but building it takes a couple of days, I suppose it’s worth going through some tedious stuff.
But then again, I’d also check: are there other similar libraries that score higher on these “changeability metrics”?
So in your specific case:
I wanted to modify Marlin 3d printer firmware
Is there any test with a mocked 3d printer to test this, or is this a case of compiling custom firmware, installing it on your actual printer, potentially bricking it if the firmware is broken - etc etc
Ok, sure. So in a tech race, if energy is a bottleneck - and we’d be pouring $7tn into tech here - don’t you think some of the improvements would be to power usage effectiveness (PUE), or a better compute-per-power ratio?
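For reference, PUE is just total facility power divided by the power that actually reaches the IT equipment - a quick sketch with made-up numbers:

```python
# PUE = total facility power / IT equipment power (1.0 would be perfect).
# Numbers below are made up purely for illustration.
total_facility_kw = 1500   # servers + cooling + power distribution + lighting
it_equipment_kw = 1000     # what the servers themselves draw

pue = total_facility_kw / it_equipment_kw
print(pue)  # 1.5 -> a third of the energy never reaches the compute
```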
What benefits to “AI supremacy” are there?
I wasn’t saying there were any, I was saying there are benefits to the race towards it.
In the sense of: if you could pick any subject for world governments to be in a race about - “first to the moon”, “first nuclear bomb” or “first hydrogen bomb”, “best tank”, or “fastest stealth bomber” -
I think if you picked a “tech war” (AI in this case) - practically a race of who can build the lowest-nm fabs, fastest hardware, and best algorithms - at least you end up with innovations that are useful
For all our sakes, pray he doesn’t get it
It doesn’t really go into why not.
If governments are going to be pouring money into something, I’d prefer it to be in the tech industry.
Imagine a cold-war / Oppenheimer situation where all the governments are scared that America / Russia / UAE will reach AI supremacy before {{we}} do. Instead of dumping all the moneyz into Lockheed Martin or Raytheon for better pew pew machines, we dump it into better semiconductor machinery, hardware advancements, and other stuff we need for this AI craze.
In the end we might not have a useful AI, but at least we’ve made progress in other things that are useful
Well, @TheGrandNagus and @SSUPII - I think a lot of Firefox users are power users. And a lot of the non-power Firefox users, like my friends and family, are only using Firefox because I recommended it to them, and I installed all the appropriate extensions to optimize their browser experience.
So if Firefox alienates the power users - who’s left? I’m gonna move on to Waterfox or Librewolf, but those are even more next-level obscure browsers. My non-tech friends know about Chrome, Edge, and Firefox, so I can convince them to use one of those… But I kinda doubt I can get them to use Librewolf. If I tell them Firefox sucks now too, they’ll probably default to Chrome
If AI integration is to happen […], then this to me seems to be the best way to do it.
Well, to me the best way to do it would be for Mozilla to focus on being the best bare-bones, extendable browser.
Then - if people want an AI in their browser - they should be able to install an AI extension that does these things. It’s a bit annoying that they’re putting random stuff like Pocket, and now an AI, into the core of the browser, instead of just offering it as an installable extension
So the full story would be that Elon stayed up until 5:30 a.m. playing Elden Ring in a Vancouver hotel - was very stressed, saw on Twitter that people knew he was raging in Vancouver based on the Jet Tracker - stressing him out even more -
Thought “Fuck it, maybe I can’t beat Malenia, but at least I can beat this asshat on Twitter tracking me!”
…If only FromSoftware had added some pay-to-win elements… Like “For A Small $1 billion Micro-Transaction you get the uber Malenia slayer sword!” -
We would be living in a totally different timeline
I suppose it’s not allowed then. That kind of sucks - it is pretty convenient to just use a replicate.com machine and use a large image model kinda instantly. Or spin up your own machine for a while if you need lots of images, without a potential cold start or slow usage on shared machines
I wonder why they chose this license, because the common SD license basically lets you do whatever you want
Well, I have Copilot Pro, but I was mainly talking about GitHub Copilot. I don’t think having Copilot Pro really affects Copilot’s performance.
I mainly use AI for programming (both writing code myself and building an AI-powered product) - so I don’t really know what you intend to use AI for, but outside of the context of programming I can’t really speak to their performance.
And I think Copilot Pro just gives you Copilot inside Office, right? And more image generations per day? I can’t really say I’ve used that. For image generation I’m either using the OpenAI API again (DALL-E 3), or I’m using Replicate (mostly SDXL)
This model is being released under a non-commercial license that permits non-commercial use only.
Hmm, I wonder whether this means that the model can’t be run under replicate.com or mage.space.
Is it commercial use if you have to pay for credits/monthly for the machines that the models are running on?
Like, is “selling the models as a service” commercial use, or is it that the output of the models can’t be used commercially?
I use Copilot, but dislike it for coding. The “place a comment and Copilot will fill it in” barely works, and is mostly annoying. It works for common stuff like “// write a function to invert a string” that you’d see in demos - just common functions you’d otherwise copy-paste from StackOverflow. But otherwise it doesn’t really understand when you want to modify something. I’ve already turned that feature off
The chat is semi-decent, but the “it understands the entire file you have open” concept also only works about half the time, so the other half it responds with something irrelevant, because it didn’t see the code / method your question was about.
I opted to just use the OpenAI API, and I created a Slack bot that I can chat with (a Slack thread works the same as a “ChatGPT context window”; new messages in the main channel are new chat contexts) - so far that still works best for me.
You can create specific slash commands if you like that preface questions, like “/askcsharp” in Slack would preface the question with something like “You are an assistant that provides C# based answers. Use var for variables, xunit and fluentassertions for tests”
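A rough sketch of that slash-command setup, using Python’s slack_bolt and the OpenAI REST endpoint - the tokens, command name, and prompt are placeholders, not the exact bot described above:

```python
# Rough sketch: a slash command that prefaces the question with a system
# prompt before sending it to the OpenAI chat completions endpoint.
import os
import requests
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

SYSTEM_PROMPT = ("You are an assistant that provides C# based answers. "
                 "Use var for variables, xunit and fluentassertions for tests.")

@app.command("/askcsharp")
def ask_csharp(ack, respond, command):
    ack()  # Slack requires an ack within 3 seconds
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4",
              "messages": [{"role": "system", "content": SYSTEM_PROMPT},
                           {"role": "user", "content": command["text"]}]},
    )
    respond(resp.json()["choices"][0]["message"]["content"])

if __name__ == "__main__":
    app.start(port=3000)
```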
If you want to be really fancy, you can even vectorize your codebase, store it in Pinecone or PGVector, and have an “entire codebase aware” AI
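Very condensed sketch of that idea - embed code chunks, store the vectors, pull the nearest ones in for a question. The in-memory list here is where Pinecone or PGVector would go in a real setup, and the code chunks are hypothetical:

```python
# Condensed sketch of "codebase-aware" retrieval: embed chunks of code,
# then find the most similar ones for a question. In production the
# in-memory index would be Pinecone or PGVector.
import os
import numpy as np
import requests

def embed(text: str) -> np.ndarray:
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-embedding-3-small", "input": text},
    )
    return np.array(resp.json()["data"][0]["embedding"])

# Hypothetical chunks; in reality you'd walk your repo and chunk the files
chunks = ["def parse_config(path): ...", "class UserRepository: ..."]
index = [(c, embed(c)) for c in chunks]

def most_relevant(question: str, k: int = 1):
    q = embed(question)
    scored = [(c, float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))))
              for c, v in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

print(most_relevant("where do we load configuration?"))
```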
It takes a bit of time to custom-build something, but these AIs are basically tools. And a custom-built tool for your specific purpose is probably going to outperform a generic version
This situation is due to npm’s policy shift following the infamous “left-pad” incident in 2016, where a popular package left-pad was removed, grinding development to a halt across much of the developer world. In response, npm tightened its rules around unpublishing, specifically preventing the unpublishing of any package that is used by another package.
This already seems like a pretty strange approach, and takes away agency from package maintainers. What if you accidentally published something you want to remove…? It kind of turns npm into a very centralized system.
If they don’t want to allow hard removals because of this, why not let people unpublish packages into a soft/hidden state instead? Maybe keep them available for current dependents, but don’t allow new ones - or something. (npm’s deprecate command gets close, but a deprecated package is still fully installable.)
I prefer Azure DevOps’ approach. When you publish any NuGet or npm package into their system, the entire package dependency tree is pulled in and backed up there. So you no longer rely on npm to keep your referenced packages safe
I use it to backup my save games. Not sure if that’s conventional.
For example, I’d MKLink %appdata%/Local/Pal/Save/ to a folder in my save repo, and commit that every once in a while.
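If you wanted to automate the “every once in a while” part, a throwaway sketch - the repo path is hypothetical, and the MKLink setup above already puts the saves inside it:

```python
# Throwaway sketch: auto-commit the linked save folder.
import subprocess
from datetime import datetime

REPO = r"C:\repos\save-games"  # hypothetical location of the save repo

subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
subprocess.run(["git", "commit", "-m", f"saves {datetime.now():%Y-%m-%d %H:%M}"],
               cwd=REPO, check=False)  # commit exits nonzero harmlessly if no changes
```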
Yea, noticed that last week. It’s already fixed again in the latest ReVanced:
1. Delete microG, ReVanced Manager, and YouTube ReVanced
2. Download and install the new GmsCore, which replaces microG: https://github.com/ReVanced/GmsCore/releases/tag/v0.3.1.4.240913
3. Download and install the latest version of ReVanced Manager: https://github.com/ReVanced/revanced-manager/releases/tag/v1.20.1
4. Download and install YouTube 19.09.37 from APKMirror: https://www.apkmirror.com/apk/google-inc/youtube/youtube-19-09-37-release/youtube-19-09-37-android-apk-download/