MiniDisc has the advantage of looking very cool. Hi-MD can store a whole GB which is enough even for uncompressed PCM
Mir was pretty big, had some coed crews and featured some pretty long missions. So seems possible it happened before ISS.
FWIW, at least on Android I was able to upload a full-res image. I know we do some client-side format conversion on mobile since our desktop clients generally can't display HEIC/HEIF images. Resizing may occur in some cases, but I'm not sure. Haven't heard of any recent changes there though
Hmm, I did a quick test with a 4624x3472 JPEG and it seemed to round-trip more or less unscathed. Some EXIF data was stripped, but that's intentional to avoid privacy leaks (and has been present for a long time). The in-app view is downscaled, but that's always been the case AFAIK
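A quick way to check that kind of round trip yourself, as a rough sketch only (Pillow-based; the filenames are placeholders):

```python
# Compare EXIF before and after an upload/download round trip.
# Filenames are placeholders for the original and re-downloaded JPEG.
from PIL import Image

before = Image.open("original.jpg").getexif()
after = Image.open("downloaded_from_app.jpg").getexif()

# A stripped image typically comes back with an empty (or much smaller) EXIF dict.
print(f"EXIF tags before: {len(before)}, after: {len(after)}")
```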
My maternal grandfather was a domestic missionary when my mother and her siblings were born and then worked at a bottle cap printing press. My paternal grandfather lived further away and died when I was young so I didn't know him well, but he ran a restaurant with my uncle before he retired
Bay Area folks: BART MART popup today at the Downtown Berkeley station from 3-6pm featuring work from local artists bartmart.notion.site/2feb53326cb2...
cc @brandon.insertcredit.com
In a sense, but probably not the way you're thinking. All digital cameras do a certain amount of image processing on the raw sensor data. Smartphone cameras do a lot of it and modern ones do use ML-based processing. This tends to have funny results for small text
My city claims that shutting a brand new bike lane for 2.5 years and dumping riders onto a high volume street is fine but also "cyclist safety remains non-negotiable".
Yeah, I can't find datasheets for the TI chips used in the 100, but the service manual notes that game values are represented as analog voltages. Even more obvious on the original Odyssey schematic. Not nearly enough transistors for it to be digital. Pong schematic uses a lot of digital logic gates
This is also a pretty common printing artifact. Fonts intended for print (as opposed to on screen) have to deal with this. You can solve it on the source side by thinning out the parts where you see the most gain (like concave corners).
This looks like registration error which is a fairly common artifact with 4 color offset printing. Simplest solution is probably just to use pure CMYK black rather than "rich" black which uses the other colors
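For concreteness, a rough illustration of the two black definitions (the rich-black mix below is just one commonly cited recipe, not a standard):

```python
# Pure CMYK black prints from a single plate, so there's nothing to mis-register.
pure_black = {"C": 0, "M": 0, "Y": 0, "K": 100}

# "Rich" black layers the other inks under the K plate for a denser black;
# any registration error between plates then shows up as colored fringing.
# Exact mix varies by shop; 60/40/40/100 is just one common example.
rich_black = {"C": 60, "M": 40, "Y": 40, "K": 100}
```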
This is notably about twice the US average for operating costs of fully depreciated US Nuclear plants ($34/MWh). Is this just due to high cost of labor in the state or is there something else driving it to be so much worse than average? To be clear, not arguing it should have shut down in 2025.
Screenshot of a graph from a recent Lazard LCOE+ report. Utility Solar+Storage has a range of $50-$131/MWh and Onshore Wind+Storage has a range of $44-$123/MWh. The average for fully depreciated US Nuclear like Diablo Canyon is $34/MWh
It's kind of wild how much it's costing to keep Diablo Canyon open. PG&E estimates about $8.4 billion over 6 years. The plant produces a relatively constant 2,258 MW so assuming it has no downtime over that period, it's around $70/MWh which is around the LCOE for new solar+storage or wind+storage
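The back-of-the-envelope math, assuming constant output and zero downtime as stated above:

```python
# Rough cost-per-MWh estimate for keeping Diablo Canyon open,
# using the numbers from the post above (no downtime assumed).
cost_usd = 8.4e9          # PG&E's ~$8.4 billion estimate
years = 6
capacity_mw = 2258        # roughly constant plant output, MW

energy_mwh = capacity_mw * years * 8760   # MWh generated over 6 years
print(f"${cost_usd / energy_mwh:.0f}/MWh")  # ~$71/MWh
```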
Yeah, a lot of solar got added from 2022-2024 (August peak output of 13.8 GW to 19.65 GW), but it slowed down after that rather than accelerating (August peak was 21.69 GW, so only about 2 GW more). I think storage needs to catch up so there's less curtailment before you see a lot more solar gen
California's nuclear power is all from a single plant, Diablo Canyon. It was originally scheduled to be shut down in 2025 because it was uneconomical to run, but a 2022 law extended it to 2030. I guess the author of the article missed that bill
But this is all sort of beside the point. No one on the left really cares about uninventing something that makes software devs more productive. They care about job loss. They care about AI slop replacing things they love. Some of these are more vulnerable to "laptop" models than others
I'm not saying you're wrong, but that doesn't really answer how you actually measured this (notoriously difficult!). METR had a pretty clever study design to actually measure perceived vs real task completion speed and got results that suggest at least some claimed speedups are not actually real
For less experienced devs, there are a lot more positive research results (though there are some concerns around AI use interfering with skill acquisition). And it's certainly possible that things have improved enough in the past year that the results with experienced devs are just out of date
How are you measuring that they've gotten faster? METR's study found experienced OSS engineers thought AI was making them faster while actually making them slower on average. Now that is with early 2025 models, but I think it raises real questions about gaps between perceived and real performance
What does that have to do with the harms of generative AI though? Like making software engineers more productive is good. Making them all unemployed would be bad (for them at least). The former is maybe happening (though research results are decidedly mixed), but the latter is not.
Doing a good comparison on that is more work than I'm willing to do for a bluesky argument, but the harms from AI writing code are mostly around replacing software engineers. Which is both fairly narrow (though important to me personally) and largely not happening even with frontier models yet.
Qwen and gpt-oss were not completely wrong, but hallucinated some details. The latter was pretty verbose in its output giving it more opportunity to hallucinate I guess
For context, these were factual queries that I knew the answer to and are maybe a little obscure (average person would not know the answer), but not so obscure that a good answer wouldn't be found in the top couple of google results. Not an exhaustive test obviously
For the heck of it, I downloaded gpt-oss-120b (unsloth F16) and threw it at a CPU build of llama.cpp and it hallucinated worse than Qwen 3 on my little test query. It's not remotely comparable to ChatGPT or Claude Sonnet (without search) for factual questions.
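Roughly the kind of local test that was, sketched here with the llama-cpp-python bindings rather than the bare llama.cpp CLI (the model path, thread count, and question are placeholders):

```python
# Minimal CPU-only local inference sketch via llama-cpp-python.
# The GGUF path and prompt below are placeholders, not the actual test query.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-120b-F16.gguf",  # placeholder path to the quantized weights
    n_ctx=4096,
    n_threads=16,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Some moderately obscure factual question"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```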
You can regulate use by legitimate businesses out of existence (maybe not politically viable, but at least in principle). You can't easily do the same for what random people can run on their laptops
I am completely underwhelmed by Qwen3 32B (at least the 4-bit quant I had handy). GPT-OSS-120B is bigger than anything I looked at since I was primarily interested in things that could plausibly run on my GPU
As a software engineer I am somewhat concerned about the impacts of frontier models on my profession as they are now, but I am not seriously worried about losing my job without a few more years of frontier progress. And a lot of these open weight models are downstream of the frontier work
I have tested open weight models fairly recently actually and the ones that can reasonably run on "a laptop" (i.e. not a trillion param one like the full Kimi K2) hallucinate a lot. This is probably good enough to create a lot of problems, but not all of them
This is true for open-weight models of reasonable size and certainly some of what people are upset about will continue with just those, but not all of it. The list of companies with the resources to deploy frontier models at scale is pretty small. Same for those that can train new frontier models
Mega Parodius Stage 8 Boss Update !!
Tackling the technically challenging bits head-on with the full-screen scaling boss in Parodius - Puyon.
More information in the YouTube notes there for those interested.
Lots of tricks to get enough performance etc.
Genesis does !!
youtu.be/pqNbWbk8iPA