You’re missing options like rail
Humans learn a lot through repetition, so there’s no reason to believe that LLMs wouldn’t benefit from reinforcement of higher quality information. Seeing the same information in different contexts especially helps map the links between those contexts and helps dispel incorrect assumptions. But like I said, the only viable method they have for this kind of emphasis at scale is incidental replication of more popular works in the training samples. And when something is duplicated too much, the model overfits instead.
They need to fundamentally change big parts of how learning happens and how the algorithm learns in order to fix this conflict. In particular it will need a lot more “introspective” training stages to refine what it has learned, and pretty much nobody does anything even slightly similar on large models, because they don’t know how and it would be insanely expensive anyway.
Neither does his computer after all those viruses
Yes, but should big companies with business models designed to be exploitative be allowed to act hypocritically?
My problem isn’t with ML as such, or with learning over such large sets of works, etc. It’s that these companies are designing their services specifically to push the people whose works they rely on out of work.
The irony of overfitting is that having numerous copies of common works is a problem AND removing the duplicates would be a problem. The model needs an understanding of what’s representative for language, etc, but the training algorithms can’t learn that on their own, it’s not feasible to have humans teach it, and the training algorithm can’t effectively detect duplicates and “tune down” their influence to stop replicating them exactly. Trying to do that last part algorithmically would ALSO break things, because it would break the model’s understanding of stuff like standard legalese and boilerplate language.
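To make the problem concrete, here’s a rough sketch of the naive “detect duplicates and tune down their influence” idea, using hashed word shingles and Jaccard similarity. All names and thresholds are made up for illustration, and it demonstrates exactly the flaw described above: anything that repeats across the corpus, including legitimate boilerplate like standard legalese, gets downweighted the same way.

```python
# Naive duplicate-downweighting sketch (illustrative only).
import hashlib

def shingle_hashes(text: str, k: int = 5) -> set[int]:
    """Hash each k-word shingle of the text into a fingerprint set."""
    words = text.lower().split()
    return {
        int(hashlib.md5(" ".join(words[i:i + k]).encode()).hexdigest(), 16)
        for i in range(max(len(words) - k + 1, 1))
    }

def jaccard(a: set[int], b: set[int]) -> float:
    """Set overlap: 1.0 means identical shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def sample_weights(corpus: list[str], dup_threshold: float = 0.8) -> list[float]:
    """Weight each sample by 1/n, where n is its number of near-duplicates
    (including itself). Note this bluntly punishes repeated boilerplate too."""
    fingerprints = [shingle_hashes(text) for text in corpus]
    weights = []
    for fp in fingerprints:
        n = sum(1 for other in fingerprints if jaccard(fp, other) >= dup_threshold)
        weights.append(1.0 / n)
    return weights
```

Two identical samples each end up with weight 0.5 while a unique sample keeps weight 1.0, which is the intended effect, but a contract template repeated across thousands of documents would be suppressed just as hard.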
The current generation of generative ML doesn’t do what it says on the box, AND the companies running them deserve to get screwed over.
And yes, I understand the risk of screwing up fair use, which is why my suggestion is not to hinder learning, but to require the companies to track the copyright status of samples and inform end users of the licensing status when the system detects that a sample is substantially replicated in the output. This will not hurt anybody training on public domain or fairly licensed works, nor anybody who tracks authorship when crawling for samples, and it will also not hurt anybody who has designed their ML system to be sufficiently transformative that it never replicates copyrighted samples. It only hurts exploitative companies.
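The replication check I’m suggesting doesn’t need to be exotic. Here’s a minimal sketch of the idea using word-level n-gram overlap against a corpus of tracked samples; the function names, the n-gram length, and the threshold are all hypothetical choices, not any real system’s API.

```python
# Sketch: flag model output that substantially replicates a tracked sample.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word-level n-grams; long n-grams rarely match by coincidence."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def replication_score(output: str, sample: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in the sample."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(sample, n)) / len(out)

def check_licensing(output: str, corpus: dict[str, str],
                    threshold: float = 0.5) -> list[str]:
    """Return ids of tracked samples the output substantially replicates,
    so the user can be told the licensing status of those works."""
    return [sample_id for sample_id, text in corpus.items()
            if replication_score(output, text) >= threshold]
```

A real system would index n-grams instead of scanning the whole corpus per output, but the point stands: if you tracked authorship when crawling, attaching a licensing notice to flagged output is cheap.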
Remember when media companies tried to sue switch manufacturers because their routers held copies of packets in RAM and argued they needed licensing for that?
https://www.eff.org/deeplinks/2006/06/yes-slashdotters-sira-really-bad
Training an AI can leave copies of copyrightable segments of the originals in the model; look up sample recovery attacks. If it worked as advertised, the output would be transformative derivative work with fair use protection, but in reality it often doesn’t work that way.
Apple management has explicitly stated they do not want to support better compatibility between Android and iPhone; when asked what parents who buy cheap Androids for their kids should do, their response was to buy them iPhones. Many of the problems are very easy to fix on Apple’s side, and keeping them problematic is intentional.
You shouldn’t, but the patent office doesn’t care about inventive height and obviousness anymore
Sounds like eminent domain talk if you think there are already enough suitable available homes
The parts are available, but I don’t know if there’s a way to send them regular stereo channel inputs without hardware hacking or writing custom software drivers from scratch.
My Xperia 1 III used to be quite disappointing at times (it was too focused on RAW output for editing, even for stacked HDR shots), but the 1 V is legit good. I can tell the new stacked sensor improved light capture (less noise in low light), and auto mode is much better. I still see limitations in both auto and manual, but it’s not so bad. The most annoying parts have to do with focus and color balance when zooming in certain light conditions, and contrast in complex scenes in auto mode.
They still exist, although they aren’t as common. Plenty of places have them if you order online
Seems to be rather unique to them.
https://www.valvesoftware.com/en/index/deep-dive/ear-speakers
Usually one bud is the primary one, which connects to the phone and maintains the link. It then pairs with the other bud and relays the Bluetooth session encryption key so the second bud can play its part of the audio
I have neckband Bluetooth headphones of various kinds too (I’ll never ever use those tiny plugs; I’d be worried about losing them, and chances are they wouldn’t fit well). I’ve got a regular sport model, and recently got a cheap air conduction headset too.
Or get a phone that still has the port, from a company like Sony or Asus or whatever
https://comptroller.nyc.gov/reports/spotlight-new-york-citys-housing-supply-challenge/ 🤷
I don’t disagree with the rest: walkable cities are important, speculators shouldn’t be involved in housing, etc. But some places genuinely have a lack of available housing, and there the solution is to build.
And where are they located, and why are they empty?
There’s your next big problem: a significant fraction of them aren’t where people want (or need) to be, or they’re vacation homes that don’t belong in these stats (unless you want to eminent domain them). Suburbs, ghost towns, and remote regions push the average up.
https://todayshomeowner.com/general/guides/highest-home-vacancy-rates/
Wine/Proton on Linux occasionally beats Windows on the same hardware in gaming, because there are inefficiencies in the original environment that don’t get replicated unnecessarily.
It’s not quite the same with CPU instruction translation, but the main efficiency gain of ARM comes from being designed to idle everything it can, while that hasn’t been a design goal of x86 for ages. A substantial factor in efficiency is figuring out what you don’t have to do, and ARM is better suited for that.
It’s not that uncommon in specialty hardware, with CPU instruction extensions for a different architecture made available specifically for translation. Some stuff can be translated quite efficiently on a normal CPU of a different architecture; some stuff needs hardware acceleration. I think Microsoft has done this on some Surface devices.
Zeno says hi