comment by swillden, 2026 @02:53AM (#65963716)
Attached to: Fourth US Wind Farm Project Blocked By Trump Allowed to Resume Construction
I'm sure they'll welcome those views, just like they welcomed the illegal aliens flown in from Texas and Florida
So, in other words, you think they will welcome those views? The community in Martha's Vineyard did, after all, act to help those kidnap victims.
And they did, in fact, welcome the wind farm view -- or at least didn't oppose it strongly -- because the wind farm under construction is offshore of Martha's Vineyard.
It's actually quite far offshore, though. I sailed around Martha's Vineyard in August -- we spent the night moored off of Edgartown, then in the morning decided to make our way down the eastern side, facing the open ocean. The instructor (this was a sailing class, Advanced Coastal Cruising) told us about the wind farm, so we looked for it but couldn't see it. The farm is 15 miles offshore, so you can't see the wind turbines during the day at all, even on a clear day.
The instructor said he thought you could see the mast lights at night, but looking at the nautical chart I think he was wrong, at least from boat level. The chart says each turbine has a yellow flashing light on it, at 69.9 feet. From eye height on the boat (about 6 feet above the water), applying the "distance to horizon" formula, I get an observable distance of 12.6 nm. The closest we got to the nearest wind turbine was 11.4 nm, according to my chart (we were about 1 nm offshore), so assuming the light was bright enough we could have seen it. But the next-closest turbines (three of them) were right at the 12.6 nm distance, so their lights would have been right on the horizon, if visible at all. All the rest were 15+ nm away from where we were.
People in tall houses on the higher points of the island are high enough to see a 70-foot object from ~22 nm, so they could theoretically see the lights of about a third of the turbines. BUT there's another issue: the charts don't list the luminous range of the turbine lights. That typically means they're not very bright, and not visible from more than 3-5 nm.
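For anyone who wants to check the arithmetic, here's a minimal sketch of that geographic-range estimate, using the common range (nm) ~= 1.17 * sqrt(height in feet) approximation for distance to the horizon. The 69.9-foot light and ~6-foot boat eye height come from the comment above; the ~110-foot observer height for a tall house on high ground is my assumption, not something from the chart.

    // Geographic range of a light: horizon distance for the light plus horizon
    // distance for the observer, using range (nm) ~= 1.17 * sqrt(height in feet).
    fn horizon_nm(height_ft: f64) -> f64 {
        1.17 * height_ft.sqrt()
    }

    fn main() {
        let light_ft = 69.9;      // charted height of the turbine light
        let boat_eye_ft = 6.0;    // eye height on the boat
        let house_eye_ft = 110.0; // assumed eye height for a tall house on high ground
        println!("from the boat:  {:.1} nm", horizon_nm(light_ft) + horizon_nm(boat_eye_ft));  // ~12.6 nm
        println!("from the house: {:.1} nm", horizon_nm(light_ft) + horizon_nm(house_eye_ft)); // ~22 nm
    }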
TL;DR, looking at the charts, I seriously doubt anyone on Martha's Vineyard can see the turbines at all, ever, day or night. You'd have to get a lot further out into the Wild Atlantic to see them, I think, even at night.
comment by swillden, 2026 @07:59PM (#65963340)
Attached to: AI Use at Work Has Increased, Gallup Poll Finds
*Those* are the novices I am / we are concerned about never advancing beyond "novice" level.
Indeed. That's a very real concern. We can safely and effectively use LLMs because of our experience and deep understanding of all the layers. But, clearly, novices who come up with LLM assistance will never have to do that. They'll rely on the AI.
I suspect it's more of that than what some are claiming that "software is doomed" and "we're going to lose all experienced coders". Nah...I suspect we're just changing the type of coder that's going to be considered "experienced" and the domain we're going to consider them experienced in.
That's... plausible! And honestly the most hopeful thing I've heard in a while about what the future of the profession looks like. I like your analogy with compilers and other low-level tools that we used to have to know how to double-check.
But my point wasn't about any of that future stuff. My point was that I find Claude to be incredibly useful to me in getting my work done faster and better now.
comment by swillden, 2026 @07:49PM (#65963328)
Attached to: AI Use at Work Has Increased, Gallup Poll Finds
"I'm writing a new crypto library"
yeah ok so you can be put on ignore.
Sigh. That's why I clarified that I'm not writing algorithms.
Also, you should consider that I wrote the primary crypto library used on Android -- some three billion devices. I'm neither a dilettante nor a clueless noob; I've been a professional crypto security engineer for over 20 years. The reason I'm writing a library with a new API is that I have broad and deep experience with all of the existing libraries and the footguns they provide, and I'm trying a novel approach that I think will reduce user error.
comment by swillden, 2026 @01:41AM (#65962052)
Attached to: AI Use at Work Has Increased, Gallup Poll Finds
AI is very good for novices, people who don't know something well.
There is plenty of evidence already that novices using AI will remain novices, rather than develop advanced skills. So yes, as a "novice", you can get to some result quicker by using AI, but the result will be that of a "fool with a tool", and your next work's result won't be better, because you didn't learn anything.
It depends...
So, I'm a very experienced software engineer. Going on 40 years in the business, done all kinds of stuff. But there are just too many tools and too many libraries to know, and you never use the same ones on consecutive projects; that's just reality. What I've found is that telling an LLM to do X using this tool I've never used before, and then examining the output (including asking the LLM for explanations and checking them against the docs) until I understand it, is at least an order of magnitude faster than learning the tool myself. I have no doubt that an expert in that tool would end up questioning some of the choices, because I only end up exploring the parts of it that the LLM chose to use. But that matters less than the fact that I have a working solution, I understand how and why it works, and I'm capable of debugging it -- far quicker than if I'd learned the tool from scratch myself.
As an example, I'm writing a new crypto library -- not implementing the underlying algorithms, which will actually be executing in secure hardware, just putting a user-friendly API on top and pre-solving a lot of the subtle problems that come up so the users of the API won't have to. Anyway, my implementation is in Rust, for good reasons, but at least some of the clients want C++, so I need to bridge C++ and Rust. After looking up the options and discussing the pros and cons with the LLM, I made a choice (CXX), and told the LLM to write a CXX FFI so the C++ API I wrote can call the very similarly-structured (modulo some C++/Rust differences) Rust API I wrote. The LLM did this in about five minutes, including writing all of the Makefiles to build and link it, and some tests.
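For readers who haven't used CXX, the bridge ends up looking roughly like the sketch below. This is not the actual library code -- the names (cryptolib, KeyHandle, open_key, seal) are invented for illustration -- it just shows the shape of an extern "Rust" bridge that lets a hand-written C++ API call into a Rust implementation.

    // Hypothetical sketch of a cxx bridge exposing a Rust API to C++.
    // None of these names come from the real library; they only illustrate the shape.
    #[cxx::bridge(namespace = "cryptolib")]
    mod ffi {
        extern "Rust" {
            type KeyHandle;
            fn open_key(name: &str) -> Result<Box<KeyHandle>>;
            fn seal(key: &KeyHandle, plaintext: &[u8]) -> Result<Vec<u8>>;
        }
    }

    // Opaque Rust type; C++ only ever sees it through the bridge.
    pub struct KeyHandle {
        id: u64, // stand-in for a secure-hardware key reference
    }

    fn open_key(name: &str) -> Result<Box<KeyHandle>, String> {
        if name.is_empty() {
            return Err("empty key name".to_string());
        }
        Ok(Box::new(KeyHandle { id: 1 }))
    }

    fn seal(key: &KeyHandle, plaintext: &[u8]) -> Result<Vec<u8>, String> {
        // The real sealing happens in secure hardware; this placeholder just echoes the input.
        let _ = key.id;
        Ok(plaintext.to_vec())
    }

The generated C++ header then exposes these functions under the cryptolib namespace, so the C++ side never touches the Rust types directly.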
It didn't work, of course. But it wouldn't have worked the first time if I'd written it either. So I reviewed the tests the LLM had written, directed it to improve them, then told it to debug the problem and make them pass. It did so, and explained the bugs and the fixes. While the LLM was working, I read the bridge code it had written and looked stuff up in the documentation, occasionally asking questions of another LLM instance. Within 20 minutes it was all working. So, I'm 30 minutes into this FFI task and I already have (a) code that works and (b) tests that prove it. I can also see a bunch of things about the bridge code that I don't like. Some of those things turn out to be fine, most of them really are bad; exploring the options (with the LLM's help), tweaking a bit, and fiddling with the tests for another hour gets me to something that appears -- to someone with my decades of programming experience but limited knowledge of this tool -- to be pretty good.
This is good, because I have some more new tools to learn/use. Today. 90 minutes got me a good-enough-for-now FFI solution (for a pretty large and complex API surface) that's probably not too far from actually being good.
Next up, I need a persistent key/value store with particular performance characteristics, high reliability, a solid track record, a no_std (no standard library) Rust API, and the ability to run on QNX 7.1. Turns out there is no such beast, but lmdb is pretty close: it has everything except the no_std Rust API. There are, however, some Rust crates that offer thin wrappers around lmdb's C API, and lmdb-master-sys, part of heed, looks like the best-maintained and most widely-used of these. So I asked the LLM to take a look at what changes might be necessary to make it work as no_std. The LLM identified a tiny set of places where the standard library is used, all of them trivially replaceable. So I made the changes while asking the LLM to write some unit tests. It worked, first time. I sent a PR to the maintainer of the library. Total time, about 20 minutes. It would have taken me at least three times that long just to figure out how to use the lmdb API to write the tests.
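To give a sense of what "trivially replaceable" means here: in a -sys crate the std usage is usually just the C type aliases, which have lived in core since Rust 1.64, so the fix is a path swap plus a feature gate in lib.rs. A hypothetical sketch of the pattern (not the actual lmdb-master-sys diff):

    // Hypothetical sketch of the std-to-core substitution in a -sys crate's lib.rs;
    // the real lmdb-master-sys changes may differ.
    #![cfg_attr(not(feature = "std"), no_std)]

    // Before: use std::os::raw::{c_char, c_int};
    // After: the same C types are available from core, so std isn't needed at all.
    use core::ffi::{c_char, c_int};

    extern "C" {
        // One of the raw lmdb bindings; lmdb.h declares: char *mdb_strerror(int err);
        pub fn mdb_strerror(err: c_int) -> *mut c_char;
    }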
Next up... I'll stop here, but you get the idea. If you need to work with a lot of tools you don't know well -- and at least for me the speed at which I need to jump between tools pretty much guarantees that I'll never know any of them really well -- but you have enough experience and deep-enough expertise to quickly see what an implementation is doing and to understand why and how, LLMs will massively accelerate your work.
They also give you time to post on slashdot, while you wait for the LLM to do stuff. Er, I mean they give you time to catch up on email, do code reviews, watch company training videos, read documentation, etc.
Others' mileage will vary, of course, but I find that using an AI tool significantly increases my overall velocity (probably 1.5X overall) while simultaneously significantly increasing the quality of my output. The quality increase isn't because the LLM is better than me. It's definitely not. But it's way, way faster, especially at doing the grungy work that I tend not to do as thoroughly as I should. For example, writing really thorough commit messages. I totally delegate commit message writing to the LLM now. I review and sometimes tweak, but not often.
And its speed makes some things possible that otherwise weren't. For example, often I'll see some change that could make my code 10% better with a large refactor, and I have to weigh the benefit of the small improvement against the time sink of the large refactor. No longer. I tell the LLM to do it. Sometimes I tell three instances of the LLM to do three different things (in different checkouts of the code), then decide which, if any, I want to keep (after significant tightening and improvement, some manual, some by giving detailed directions to the LLM).
The result is that while I might do one in ten of those 10% improvements without an LLM, netting an overall 10% improvement, I'll probably do half of them with the LLM (the other half I'll realize weren't actually good ideas, for reasons that weren't obvious until making the attempt -- or seeing the LLM make the attempt). Net improvement, ~60%.
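(If you want the arithmetic: assuming the improvements compound, five 10% improvements come to 1.1^5 ≈ 1.61, so roughly 60% overall; counted additively it's still about 50%.)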
And as for debugging... wow. Claude is seriously good at debugging. It doesn't always get it right the first time, but between the speed at which it can examine the situation, form a hypothesis, try to invalidate it, and move on to the next one, and the quality of its hypotheses, it may be two orders of magnitude faster than me. It's especially good if you give it a stack trace to parse. Repeatedly it's found the root cause of fairly deep, grungy bugs in less than five seconds, including the time it took to generate a detailed and precisely-correct explanation of the problem. It then takes me a few minutes to parse and understand the explanation, then validate it against the code and (if necessary) relevant documentation. Claude isn't always right in its analyses, of course. But it's very good.
Anyway, for me LLM assistance for development significantly improves both my productivity and the quality of my output. YMMV.
comment by swillden, 2026 @12:47AM (#65962018)
Attached to: The Bill Gates-Epstein Bombshell - and What Most People Get Wrong
Incompetence isn’t baked in to authoritarianism - it’s left entirely up to chance.
Chance, yes, but there are two factors you're failing to consider.
The first is that competent people generally don't want to work for narcissistic authoritarians, both because it sucks and because they know their own value and want to be hired for that, rather than because they're good at being sycophantic.
The second is that competent people are rare. If you're choosing at random, the odds are extremely high that you'll get an incompetent one. This is exacerbated by the first factor, since competent people are likely to remove themselves from consideration.
So, yes, incompetence is baked into authoritarianism -- at least, incompetence at anything other than brutality and corruption. The people who are really good at those things actively seek out authoritarians, because non-authoritarian regimes won't let them get away with doing what they like and are good at. If you're a really nasty son of a bitch you can trade undying loyalty and nauseating obsequiousness for a pass to exercise your nastiness. For people like that, it's a good trade. It's a good trade for the authoritarian, too, because authoritarians need nasty people to inspire fear in their subjects.
Of course, even with all of these factors working against them authoritarians typically manage to find a few competent people and keep them around. But they're always a tiny minority. Incompetence is baked in.
comment by swillden, 2026 @12:39AM (#65962008)
Attached to: The Bill Gates-Epstein Bombshell - and What Most People Get Wrong
Clinton's the one who will take the fall.
Bill Clinton seems remarkably confident that there's nothing in the files that will implicate him. Maybe it's just a show, but, if so, it's a compelling one.
comment by swillden, 2026 @12:36AM (#65962006)
Attached to: Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted
He was probably being told daily to shove Copilot into systemd somehow.
Not ruin Linux on purpose or any sort of "make Windows win" kind of deal, just get Copilot inside Linux, because SOMEONE has to use the thing.
Given what he's decided to do, it's more likely he wanted to implement his trust verification ideas in Windows and got shut down, so he decided to go do it with Linux.
As someone who spent much of the last decade thinking about and working on topics related to system integrity and remote proof thereof, I'm interested to see what his ideas are and whether they're actually novel, or at least have some innovative twist.
comment by swillden, 2026 @12:33AM (#65962004)
Attached to: Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted
No, probably not. Most people have made their peace with systemd and I think it'll persist.
Definitely. It's the standard now in the major Linux distros and it's not going away. It works and they're happy with it.
I don't care, myself, but I only run a handful of machines. OTOH, the people I know who administer thousands of machines like systemd just fine, as do folks like the Debian leadership.
comment by swillden, 2026 @12:27AM (#65962002)
Attached to: Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted
now I'm elsewhere with saner housing prices.
The trick is to live somewhere with sane housing prices while working for a Silicon Valley company, for SV pay -- or at least close to it. Yeah, they'll ding you a little for being remote, because they can, so you'll only be making 80% of what you would living in SV... but you'll still be making 3-5X what local companies will pay.
comment by Todd Knarr, January 31, 2026 @10:42PM (#65961946)
Attached to: Author of Systemd Quits Microsoft To Prove Linux Can Be Trusted
Cryptographic verification of the system components and all the other stuff is important; it's a way to detect and limit the damage once a system is compromised and in the process of being infected by malware. That, though, happens far too late. For it to work you have to assume that the parts of the system that enforce verification don't have exploitable bugs in them, and we've already seen that's never the case -- especially when a single key held by a single entity is the trust root for a large number of systems.
We need changes in application and user behavior that make phishing attempts more difficult and make it easier to detect that an email or document wasn't sent by the entity it purports to be from. We need changes that reduce the available attack surface of the system, especially undocumented attack surfaces (e.g. systemd's invisible SSH server), and that make it more obvious to users when something's active that they didn't ask to be active. We need applications that don't assume full access to the entire system, and that don't refuse to work without it.
comment by fuzzyfuzzyfungus, January 31, 2026 @01:56PM (#65961228)
Attached to: White House Scraps 'Burdensome' Software Security Rules
I don't doubt that the previous requirements were effectively impossible for nontrivial portions of the industry and their customers; though, given the wall-to-wall dumpster fire that is IT and IT security, I can only see the attempt to treat that as evidence that the regulations were unrealistic and unduly burdensome as either myopic or deeply cynical.
Commercial software and both commercial and institutional IT operations are much more an example of the fact that you can absolutely run on dangerous and unsustainable shortcuts so long as there are no real consequences for failure than they are a case of a competent and successful industry at risk of being stifled by burdensome regulation.
comment by fuzzyfuzzyfungus, January 30, 2026 @07:40PM (#65960198)
Attached to: White House Scraps 'Burdensome' Software Security Rules
The reasoning is honestly just baffling. Apparently the old requirements "diverted agencies from developing tailored assurance requirements for software and neglected to account for threats posed by insecure hardware" by requiring that people keep track of what software they were actually using.
Aside from the...curious...idea that knowing what your attack surface looks like is a diversion from developing assurance requirements, the claim that the old SBOM policy is being revoked for not focusing on insecure hardware is odd on two counts. The obvious one is that basically anything with a sensible scope focuses only on certain issues and leaves other issues to be handled by other things. The only slightly less obvious one is that most "insecure hardware" -- unless you've qualified for a really classy covert implant or have high-sensitivity TEMPEST issues or something -- is not actually a hardware problem but a firmware problem, which is just a software problem that isn't as visible: exactly the sort of thing that SBOMs help you keep an eye on.
Not like anyone expected better; but this is exceptionally poor work.
comment by jd, :11PM (#65959746)
Attached to: One-Third of US Video Game Industry Workers Were Laid Off Over the Last Two Years, GDC Study Reveals
Gondor lit the beacons before it was under siege, because to do so after is far, far too late.
For the IT industry to start speculating AFTER it has lost a third of its workforce is to start debating whether to light the beacons only after a third of the city is taken.
This is a crisis that has been expected for a very long time. Long enough for you to have experience in fighting the bean counters. Sorry, but this is a mess of your own making. In more ways than one.
1. AI is good at a few basic tasks, but it is not good at being innovative or fresh. Nor is it ever going to be capable of being so, because you can't have the future in the training set. So regurgitating a few simple themes repeatedly was never going to be in the interests of humans, only in the interests of accountants (most of whom seem to have used the Daleks and Cybermen as a training manual on conduct) and short-term profits. Accountants don't care if a company goes belly-up; they work many accounts, so short-term profits (even if they cause medium-term collapse) are all that matter.
2. AI cannot write decent code. How could it -- it was trained on Stack Overflow and abandoned GitHub projects. But this only matters if the humans bother to write reliable code themselves. You can replace one bug-ridden pile of carp with another without users caring too much.
3. AI cannot write tightly-optimised code. But, then, I doubt most humans ever bothered to learn that skill, when they could simply instruct the user to install more RAM and a beefier CPU.
comment by Todd Knarr, 2026 @01:19PM (#65959462)
Attached to: Microsoft is Experimenting With a Top Menu Bar for Windows 11
As part of Win11, that sounds an awful lot like the taskbar. Why not just allow the taskbar to be positioned at the top as well as the bottom? If it's application-specific, doesn't that belong in the application window as part of its menu bar or status bar, or as a side bar?
comment by jd, :04AM (#65959078)
Attached to: Former Google Engineer Found Guilty of Stealing AI Secrets For Chinese Firms
Until now, I had never realised that, with something as sophisticated as the human brain, it was possible to achieve an IQ that was not only on the imaginary number line but also a negative value.