Comment (#65961768) by DamnOregonian on January 31, 2026 @07:12PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
In case there was confusion, gpt-oss-120b is a locally-run model, not an API-connected model.
Comment (#65961764) by DamnOregonian on January 31, 2026 @07:11PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
Indeed. Many of the API-connected models have a basic set of tools available (calculator, web search, web fetch, etc.)
Comment (#65961532) by DamnOregonian on January 31, 2026 @04:53PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
This "estimating" or "computing" is not a function of the LLM itself, but an internal tool or API added by the developers of the larger product, that ChatGPT's LLM can leverage as part of its process.
Incorrect.
There's a lot to unpack here. Your test, I think, was done in good faith, given some of the misconceptions you've displayed.
"Estimation" or "computation" are merely the words it selected for whatever process happened within it that produced the answer. You cannot infer from them whether or not it used a tool (that's the technical term for the LLM requesting an external function call).
With the models you're using, you really have no way to know, unfortunately. But I, having a shit-ton of experience with these things, know precisely whether or not my LLM is using a tool. We'll get back to that later.
Getting good answers from an LLM is a science in itself. You're treating it like a human, but it's not.
You gave it no constraints on the accuracy you wanted, or on how it should complete the task.
For example, with gpt-oss-120b, I gave it your precise query (no tools attached to this LLM). It reasoned for a bit, did some math, and came up with "about 13,400 ft³". Pretty impressive, actually, given the constraints.
Knowing where that would lead, I then gave it a prompt I knew would get a good answer:
"What is the volume of a spherical tank 29.5 feet in diameter?
Show your work and calculate it precisely.
"
After quite a bit more reasoning, it came up with this:
Decimal (to the nearest thousandth): V ≈ 13,442.024 ft³
There were no tools attached to this model, and its reasoning (all 8,000 tokens of it) was visible.
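The model's number is easy to sanity-check with a one-liner; here's the same volume computed directly (the "tool" path the LLM didn't have):

```python
import math

# Volume of a sphere: V = (4/3) * pi * r^3
diameter_ft = 29.5
radius_ft = diameter_ft / 2            # 14.75 ft

volume_ft3 = (4.0 / 3.0) * math.pi * radius_ft ** 3

print(f"{volume_ft3:,.3f} ft^3")       # ≈ 13,442.024 ft^3, matching the model
```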
In the future I'd skip trying to explain to me how you think LLMs work. I've been balls deep in them for too long now.
So when I first started combating the "LLMs cannot do math!" trope, I ran into the same problems you just did with your naive test.
That was actually how my adventure into the inner workings of LLMs started: trying to figure out why they sucked at the common tests people put them through.
Ultimately, using much inferior models to the one I just used, I was able to demonstrate that a reasoning LLM, given enough token budget, can calculate just about any math problem.
I challenged a person on this site to give me ten multiplications of their choosing, and then had it multiply two randomly initialized 10x10 matrices.
It passed with ease (though in the most expensive case- the matrix- it took about 70,000 tokens to solve).
No tools. It all comes down to the prompting required to get the LLM to avoid its better judgement and pass your test.
This shouldn't in any way be construed to mean this is a rational way to solve those problems. After all- tool use can solve with an ALU using a microscopic fraction of the time and power. And critically- the LLM knows this.
Some have even been fine-tuned to try to reject challenges like this, and instead give you ways to calculate it yourself with a tool if you won't attach one to them, but using pretty bog-standard jailbreaking prompt techniques, you can get them to do it anyway.
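For contrast, the tool path the model would rather take is trivial. A plain-Python sketch of that same 10x10 matrix product, the kind of exact arithmetic a calculator tool does in a microscopic fraction of the time and power (illustrative only; a real tool would use an optimized library):

```python
import random

def matmul(a, b):
    """Exact product of two square matrices via the textbook triple loop."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
n = 10
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]

C = matmul(A, B)
print(C[0][:3])  # first few entries of the product
```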
Back to the napkin.
An LLM itself does not deal with tokens. It deals with thousand-dimensional embeddings.
When you're making a chat-bot, the best way to talk to this thing is to use a token embedding network (trained along with the model).
However, what is called a "projector" model can be used for those embeddings as well.
It looks at pixels, and turns them into embeddings.
At the end of the LLM is the output layer. It converts the final transformed embedding into a set of probabilities for the next token.
If your goal wasn't to get tokens out of the thing, you could use the output embedding some other way as well.
The LLM doesn't model tokens. It models language. Language is represented as embeddings. That's how we get semantics. Tokens themselves don't contain enough data.
This is why an LLM can look at a picture of a math problem written on a napkin, and then indeed solve it for you. With no tools.
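That pipeline can be caricatured in a few lines. All the names and sizes below are made up for illustration; the point is that the token path and the pixel path both end in same-shaped embeddings, and the transformer body only ever sees embeddings:

```python
import random

random.seed(1)
D = 8  # embedding width (real models use thousands of dimensions)

# Token path: an embedding table maps token ids to D-dim vectors.
vocab_size = 100
token_table = [[random.gauss(0, 1) for _ in range(D)] for _ in range(vocab_size)]

def embed_tokens(token_ids):
    return [token_table[t] for t in token_ids]

# Vision path: a "projector" maps pixel patches into the same D-dim space.
patch_size = 16
projector = [[random.gauss(0, 1) for _ in range(D)] for _ in range(patch_size)]

def embed_patches(patches):
    # each patch is a list of patch_size pixel values, projected to D dims
    return [[sum(p[i] * projector[i][j] for i in range(patch_size))
             for j in range(D)] for p in patches]

def transformer_body(embeddings):
    # stand-in for the LLM proper: it sees D-dim embeddings,
    # never tokens or pixels
    return embeddings

text_emb = embed_tokens([3, 14, 15])
img_emb = embed_patches([[0.5] * patch_size, [0.1] * patch_size])

# Both inputs arrive in one space and can be mixed in a single sequence.
out = transformer_body(text_emb + img_emb)
print(len(out), len(out[0]))  # 5 embeddings, each D wide
```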
Comment (#65960408) by DamnOregonian on January 30, 2026 @11:10PM
Attached to: Apple Reports Best-Ever Quarter For iPhone Sales
The carrier subsidization model is fantastic if you're not looking to leave your carrier any time soon. That's the real trade-off, and a trade-off is not a scam at all.
Comment (#65960406) by DamnOregonian on January 30, 2026 @11:09PM
Attached to: Apple Reports Best-Ever Quarter For iPhone Sales
You're not wrong, of course.
Inflation causes revenue to go up, and population growth causes sales to go up (assuming adoption stays flat), which also causes revenue to go up.
But that being said- those are all very small amounts.
The 14% cited is not explained by either of those things, nor by both combined.
Comment (#65960340) by DamnOregonian on January 30, 2026 @09:46PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
Yes, I'm too full of knowledge, lol.
There's no smoke being blown. There are only facts, here.
When you feed an image into a multimodal LLM and it describes what it sees in high detail, is it no longer acting like an LLM?
I'm sorry- you're being absurd. What literally happens is that the image is fed into a projector model that converts it into embeddings, and those are- indeed- fed directly into the LLM- the same LLM that also works with tokens that have been converted into embeddings. You seem to be saying the LLM is merely the token embedding layer, which is bizarre and wrong.
The reason you didn't answer my points is that they are correct, and you cannot answer them without further highlighting that yours were not.
For the most fun you can have today, download even a medium-competency LLM, say qwen3-vl-32b-instruct, write a math problem on a napkin, photograph it, and ask the model to solve it.
Comment (#65959922) by DamnOregonian on January 30, 2026 @04:58PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
Machine learning can be, and often is, done through algorithms that are not related to AI.
No.
No matter what form of universal approximator you're using, if you're training it mathematically off of data with a machine, it is machine learning, and machine learning is a type of artificial intelligence.
Further, that isn't really relevant, because nearly all serious ML is done using transformers or variations of them now. You'd be insane not to; they're vastly more scalable.
A SQL database is one such piece of software: it is constantly taking statistical samples to "learn" what kind of data is in its tables, to make queries more efficient. No one would say that SQL statistical modeling is "AI".
That is not machine learning. That is heuristics.
If your SQL DB engine is in fact constructing a statistical model that it trains from data, then yes, that is artificial intelligence.
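The dividing line is whether parameters are fit from data. Even something as small as a least-squares line counts as a statistical model trained from observations rather than a hard-coded heuristic (a toy illustration, not how any DB engine works):

```python
# Fit y = w*x + b by ordinary least squares: the parameters w and b
# are learned from the samples, not written by a programmer.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
w = (sum((x - mean_x) * (y - mean_y) for x, y in data)
     / sum((x - mean_x) ** 2 for x, _ in data))
b = mean_y - w * mean_x

print(f"learned w={w:.2f}, b={b:.2f}")  # roughly y = 2x, recovered from data
```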
When LLMs read patterns of pixels, they are not really any longer acting as an LLM. Another example is that LLM chatbots often do math. This is not a function of LLM, but is additional functionality that has been integrated into the chatbot.
Blatantly incorrect.
Like utterly, completely fucking blatantly incorrect.
LLMs work with embeddings. They don't really care whether these embeddings are generated from a token embedding layer, or a visual model. After that, the embeddings are processed by the transformer network (the LLM).
Vision is just a type of input. LLMs are input-agnostic. They work with anything that can be turned into an embedding.
Chat bots can also do math, and it is 100% a function of the LLM. They learn by seeing examples of math problems, and when they see enough, the network generalizes the function- they learn to actually do math, via language.
Seriously, what in the fuck are you talking about?
With that said, there's not a lot of difference between recognizing pixel patterns and token patterns, to a computer, they're all just sequences of numbers. That is why LLMs are a subset of AI, and not the other way around.
LLMs don't work with tokens.
It's tempting to think that they do, because their input and their output is tokens.
However, those are just the input and output layers. The statistical model's latent space has nothing to do with tokens whatsoever.
You actually have no idea what you're talking about, and I can't figure out why you haven't educated yourself on this matter, since you're obviously interested in having an opinion about it.
Comment (#65959778) by DamnOregonian on January 30, 2026 @03:22PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
Sadly, this is spot on.
Massive improvements in productivity are not well handled by capitalism, unless you define well to mean "enrichment of the owners of that productivity".
Comment (#65959772) by DamnOregonian on January 30, 2026 @03:20PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
Shit, you're right.
Technology definitely didn't improve over the last 18 years.
There hasn't at all been a literal revolution in ML, with transformer networks going from scientific AI applications that needed the idle computing power of every computer in the world to barely solve basic protein-folding tasks, to networks that found more protein folds in a matter of months than had ever been found before.
Color me skeptical. I think you left some of your stupid showing, though.
Comment (#65959766) by DamnOregonian on January 30, 2026 @03:15PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
It's actually neither LLM nor machine learning.
Wrong. It is ML.
An LLM is a subset of AI in general, that is language-focused and can interpret loose prompts from humans, and do something logical with those prompts.
It's roughly (these days) a synonym for a GPT, merely because no other LM really exists anymore, since the performance disparity is astronomical.
But correct- you could call it a subset.
Machine learning does not require AI at all, but may employ AI.
Machine learning is a subset of AI. Particularly these days, where if you're not using a large transformer network, you're simply making an inferior product.
This technology is actually AI, but not LLM. While LLMs deal with patterns of tokens, this AI deals with patterns of pixels. It's been trained to spot cancer on images, and uses that training to help generate a diagnosis or at least a recommendation to a doctor to look more closely. It is, in both the technical sense and in laymen's terms, true AI.
You may be surprised to learn that LLMs can be trained to read "patterns of pixels" as well.
That is not the relevant part.
Comment (#65959752) by DamnOregonian on January 30, 2026 @03:12PM
Attached to: Radiologists Catch More Aggressive Breast Cancers By Using AI To Help Read Mammograms, Study Finds
NNs are universal function approximators.
Classification is but one of the many things they excel at.
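A concrete toy version of the universal-approximation claim: XOR, which no single linear model can represent, but which a one-hidden-layer network learns easily. This is a minimal pure-Python sketch with made-up hyperparameters, not anyone's production code:

```python
import math
import random

random.seed(0)

H = 8  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        g = y - t                              # dLoss/dz for sigmoid + cross-entropy
        for j in range(H):
            gh = g * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * g * h[j]
            w1[j][0] -= lr * gh * x[0]
            w1[j][1] -= lr * gh * x[1]
            b1[j] -= lr * gh
        b2 -= lr * g

print([round(forward(x)[1]) for x, _ in data])  # typically converges to [0, 1, 1, 0]
```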
Comment (#65956002) by DamnOregonian on January 29, 2026 @12:56AM
Attached to: 430,000-Year-Old Wooden Tools Are the Oldest Ever Found
that's the drift of some modern genetics research.
No, it is not.
Modern genetics research moves away from classifying things so crudely. At best it would be a phenotype.
Also, it's Scotsmen. From the grandchild of a Scotsman.
Comment (#65955990) by DamnOregonian on January 29, 2026 @12:45AM
Attached to: 430,000-Year-Old Wooden Tools Are the Oldest Ever Found
Just to clarify something for you, humans are not descended from modern apes.
Modern apes and humans are descended from a common ancestor; the ape line diverged from the line that would produce the Homo genus millions of years prior.
Your logic is sound- that we wielded and manipulated at least small pieces of wood long before stone- but you cannot infer that from the behavior of apes.
Comment (#65955984) by DamnOregonian on January 29, 2026 @12:41AM
Attached to: 430,000-Year-Old Wooden Tools Are the Oldest Ever Found
This post espouses a view of human migration that has so much wrong with it, it's largely considered debunked.
Archaeological evidence nearly universally supports the origin of H. sapiens in Africa, radiating outward, while genetic evidence makes it practically iron-clad, and even explains the archaeological anomalies.
It frankly doesn't even make sense for H. erectus to have evolved alongside humans. We're a clearly derived organism, and there are even intermediate forms that fill the gap archaeologically and morphologically.
Comment (#65955978) by DamnOregonian on January 29, 2026 @12:35AM
Attached to: Extremophile Molds Are Invading Art Museums
Sorry- Chao... Chaotician.
Life, uhhh, finds a way.
Copyright © 2026 Slashdot Media. All Rights Reserved.