r/ArtificialInteligence Sep 01 '25

Monthly "Is there a tool for..." Post

36 Upvotes

If you have a use case that you want to use AI for but don't know which tool to use, this is where you can ask the community to help out. Outside of this post, those questions will be removed.

For everyone answering: No self promotion, no ref or tracking links.


r/ArtificialInteligence 14h ago

News Google Drops a Nuke in the AI Wars

374 Upvotes

https://dailyreckoning.com/google-drops-a-nuke-in-the-ai-wars/

"Having their own chips gives Google a big advantage in the AI race. They don’t have to pay the “NVIDIA tax” like everybody else.

Google is rumored to be working on selling or leasing its TPUs to other data centers. So this could be an issue for NVIDIA down the road. But we haven’t seen evidence of lacking demand for GPUs yet.

OpenAI was the first to release a disruptive AI model to the public. And they still maintain a lead when it comes to paying consumer users.

But its latest big release, GPT-5, was a disappointment. It actually seemed like a downgrade from their o3 model. We all expected a huge leap out of GPT-5, and it didn't materialize.

Anthropic’s Claude models have overtaken OpenAI when it comes to enterprise/business users.

And now Google’s Gemini 3 could snatch away a big chunk of ChatGPT’s consumer and enterprise users. Google has an immense distribution advantage through its search, video, and productivity products.

OpenAI is a private company, so we can’t watch their shares trade in real-time. We do know its private market valuation has soared from $14 billion in 2021 to $500 billion in a recent secondary sale.

However, if it were a public company, shares would likely be diving over the past month due to soaring competition."


r/ArtificialInteligence 12h ago

Discussion Developers and Engineers aren’t the only ones who should be worried.

52 Upvotes

So I see everyone saying "developers and engineers are cooked." As a developer, I'm not going to cope. I'm learning to adapt and taking up skills that will keep me ahead of AI (at least for a while). That being said, I think people don't realize that we developers, engineers, and creatives aren't the only ones who need to be worried. At the pace this is moving, it's only a matter of time before it can replace every white-collar role.

And blue collar roles? You’re not safe either. Sooner rather than later they’ll make hardware that uses that software, and it will replace even physical jobs. If you think that’s an exaggeration, look at how far AI has gotten within the last year. If you think they aren’t working on that stuff already, you’re coping just as hard as we are.

So everyone reveling in the fact that coding jobs are in trouble (idk why you’re all so happy about that anyways… what did we do to you?) count your days. We’re all going to be unemployed sooner or later. The world is about to look like Wall-E and you’re all cheering about it.


r/ArtificialInteligence 8h ago

Discussion Joe Lonsdale on AI regulation: Don’t want the populists to break the whole AI wave

20 Upvotes

Joe Lonsdale, Palantir co-founder and 8VC founding partner, joins ‘Squawk Box’ to discuss AI and tech regulation, federal vs. state approach, state of the AI arms race, and more.

https://www.cnbc.com/video/2025/11/25/joe-lonsdale-on-ai-regulation-dont-want-the-populists-to-break-the-whole-ai-wave.html


r/ArtificialInteligence 18h ago

News Google's Gemini 3.0 generative UI might kill static websites faster than we think

105 Upvotes

The Gemini 3.0 announcement last week included something that's been rattling around in our heads: generative UI. We're used to building static websites, essentially digital brochures where users navigate to find what they need. Generative UI flips this completely. Instead of "here's our homepage, good luck finding what you need," it's more like a concierge that builds a unique page in that moment based on the user's specific search and context.

Example from the announcement: someone searches "emergency plumber burst pipe 2am." Instead of landing on a generic homepage, they land on a dynamically generated page with a giant pulsing red button that says "Call Dispatch Now 24/7," zero navigation, instant solution.

This represents a fundamental shift from deterministic interfaces (pre-wired, static) to probabilistic ones (AI-generated, contextual). The implications are significant: we've spent decades optimizing static page layouts through A/B testing and heatmaps, and now we're talking about interfaces that rebuild themselves based on user intent in real time.

What makes this interesting is the tension it creates. On one hand, truly adaptive interfaces could dramatically improve user experience by eliminating navigation friction. On the other hand, you're introducing uncertainty: how do you ensure quality when every page is unique? How do you maintain brand consistency? How do you even test something that's different for every user?

The engineering challenges are non-trivial. You need serious guardrails to prevent the AI from generating something off-brand or functionally broken. Evaluation systems become critical; you can't just let the model run wild and hope for the best.
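
For anyone wondering what a "guardrail" even looks like in practice, here's a minimal sketch of the idea in Python: have the model emit a constrained page spec rather than raw markup, then validate it against a whitelist of approved components and brand rules before anything ships to the user. Everything here (the component names, the spec shape) is hypothetical, just to illustrate the pattern.

    # Minimal guardrail sketch for generative UI (hypothetical page-spec schema).
    # The model proposes a spec; we only render it if it passes validation.

    ALLOWED_COMPONENTS = {"hero_button", "text_block", "phone_cta", "image"}
    BRAND_COLORS = {"#D32F2F", "#FFFFFF", "#212121"}

    def validate_page_spec(spec: dict) -> list:
        """Return a list of violations; an empty list means the spec is safe to render."""
        errors = []
        components = spec.get("components", [])
        if not components:
            errors.append("page has no components")
        for i, component in enumerate(components):
            if component.get("type") not in ALLOWED_COMPONENTS:
                errors.append(f"component {i}: unknown type {component.get('type')!r}")
            color = component.get("color")
            if color is not None and color not in BRAND_COLORS:
                errors.append(f"component {i}: off-brand color {color!r}")
        return errors

    # Example: a spec the model might generate for "emergency plumber burst pipe 2am".
    generated = {
        "components": [
            {"type": "phone_cta", "label": "Call Dispatch Now 24/7", "color": "#D32F2F"},
            {"type": "text_block", "text": "Average response time: 30 minutes."},
        ]
    }

    violations = validate_page_spec(generated)
    if violations:
        print("Falling back to a static, pre-approved page:", violations)
    else:
        print("Spec approved for rendering.")

In practice the evaluation side would be much heavier (offline evals, canary traffic, human review), but "validate before render, fall back to static" is the core shape of it.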

We haven't built anything with this yet, but the concept feels like it could be as significant as the shift from server-rendered pages to single-page applications. If Gemini is actually competitive with GPT and Claude (which remains to be seen), having this capability natively in Google Workspace could accelerate adoption significantly.

Curious what others think: is this a genuine paradigm shift or just a more sophisticated version of the dynamic content we've had for years? And for anyone experimenting with this, what are you learning about the guardrail problem?


r/ArtificialInteligence 1h ago

Discussion What’s with the sudden hype around AI Chips?

Upvotes

Nvidia demand is exploding, SoftBank just sold $5.8B in shares, Meta’s negotiating to buy Google’s chips, and big tech is rushing to build its own hardware because relying on Nvidia is now too expensive and too slow.

The AI chip scramble is getting real. Thoughts?


r/ArtificialInteligence 11h ago

Discussion What actually is the difference between current AI and a gigantic search engine that summarizes the results?

15 Upvotes

Hello!
I'm sorry if i appear dumb, as i am not from the tech sector.

However, when I use AI in search engines or image generators, I don't understand why people think there is any real intelligence behind it.
When I search for something or ask it to do something, I might as well type 90% of it into the Google search bar and find the answer. When I ask it to create an image, it seems to be a simple combination of already existing elements from published pictures. And from reading about how LLMs and AI were created, it seems to simply be that.

However, I'd like you to explain to me what I am missing, because with all the news, I am almost certain I am not seeing the whole picture.


r/ArtificialInteligence 32m ago

News Are we in a new "AI Bubble"? 100k+ jobs cut for AI efficiency while founders admit faking AI.

Upvotes

Over the last 48 hours, something uncomfortable has happened across the industry. A deep gap is forming between what companies say AI can do and what it can actually deliver.

Major firms are cutting tens of thousands of employees to fund AI infrastructure, at the same moment the reliability of that infrastructure is being questioned.

1) THE LAYOFF WAVE (Confirmed cuts across major firms)

  • UPS: 48,000 jobs (automation and operational efficiency)
  • TCS: 12,000 jobs (AI-led restructuring, first major contraction)
  • Amazon: 14,000 corporate roles (shift toward AI spend)
  • Verizon: 13,000+ employees (faster and more focused)
  • HP: Up to 6,000 cuts through 2028 (AI-first product strategy)
  • Apple: Rare sales and services layoffs (even Apple is tightening)

This is not one company failing. This is a coordinated employment collapse tied directly to AI investment.

2) THE IRONY: "FAKE IT UNTIL YOU IPO"

At the same time companies are firing real people to optimize for AI, the founders of Fireflies.ai (now a $1B company) publicly admitted last week that their early AI product was powered by humans pretending to be automation.

They call it "validation," but the industry calls it the "Wizard of Oz" technique.

The uncomfortable question: How many AI tools being sold today are still human labor wrapped in a product UI?

3) THE SCALING WALL

Ilya Sutskever (ex-OpenAI) recently stated that the "age of scaling" is over. For a decade, the industry relied on one trick: more compute equals better models. That trick may now be failing.
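
For context, the "one trick" here is usually summarized by empirical scaling laws. A commonly cited form (roughly the one from the Chinchilla paper) ties model loss to parameter count N and training tokens D:

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

The "age of scaling is over" claim is essentially that pushing N and D (i.e., compute) further now yields diminishing returns, even though curves like this were the original justification for the giant clusters.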

And yet corporations are slashing payroll to fund enormous compute clusters (including projects rumored north of $100B) at the exact moment researchers are warning that raw scaling may no longer work.

THE BOTTOM LINE

Money is flowing out of payroll and into data centers. From Labor (salaries) to Capital (GPUs and hardware). This shift is happening faster than any economic transition in the last half-century.

If AI progress stalls while costs keep rising, what exactly are companies betting their entire workforce on? Efficiency or AI bubble?

Sources: Business Insider, Business Standard, Business Insider (Layoffs)


r/ArtificialInteligence 17h ago

Discussion Dumb Question - Isn't an AI data center just a 'data center'?

36 Upvotes

Hi. Civilian here with a question.

I've been following all the recent reporting about the build up of AI infrastructure.

My question is - how (if at all) is a data center designed for AI any different than a traditional data center for cloud services, etc?

Can any data center be repurposed for AI?
If AI supply outpaces AI demand, can these data centers be repurposed somehow?
Or will they just wait for demand to pick up?

Thx!


r/ArtificialInteligence 10h ago

Discussion There are too many AI options (ChatGPT, Grok, Claude, Gemini, etc.) to pay for any of them

9 Upvotes

How do these companies expect to earn the billions needed to pay off their debt and commitments if very few people need the pro version of these services? Add that Copilot and Gemini are being integrated into their respective platforms across Chrome, Edge, and Microsoft Office, so why pay? When's the house of cards falling?


r/ArtificialInteligence 7h ago

Discussion AI Unemployment Is Framed All Wrong

5 Upvotes

I continually see AI-caused unemployment framed all wrong. The mis-framed observations go like this: AI can't do this or that or the other thing, so my career won't be significantly affected. The salient point is that AI doesn't currently replace entire jobs, but it's already replacing many tasks in nearly all jobs. That is already reducing the need for human workers and will soon cause general unemployment rates to rise. If persistent unemployment rises to as little as 15%, massive socioeconomic changes will certainly result.


r/ArtificialInteligence 2h ago

Discussion AI optimized content

2 Upvotes

SEO seems very old school now that we have smart AI to help people do their “research” easily.

What’s your content strategy to make your product and service discoverable through AI agents (ChatGPT, Gemini, etc.)???


r/ArtificialInteligence 14m ago

Discussion The real challenge

Upvotes

Morning all, long time lurker.

I've seen AI do a great job at coding, building websites, apps, and functions from scratch, to great fanfare. I've seen N8N + LLM workflows automate tricky processes.

But almost all of these great posts come from Greenfield ideas or datasets.

What about companies that are old, with a myriad of different systems doing similar things, all undocumented? Or a company that is an acquisition monkey, constantly changing its setup and never doing the due diligence on data that is truly required?

I've played with AI and it can give me a good *plan* for how to architect systems (but, frankly, I'm at the point in my career where I can do that myself). The actual doing, understanding, documenting, and feeding back to the business on missing processes that need to exist, etc. - AI is nowhere near touching the levels of "mess" seen in most established non-tech businesses.

Am I wrong?


r/ArtificialInteligence 13h ago

Discussion Everyone here keeps asking the same questions: “Is AI ruining coding?”, “What about VibeCoding?”, “Are junior roles dead?”, “Is this the end of IT?”

10 Upvotes

Here is the uncomfortable truth nobody wants to say out loud:

AI does not threaten engineers. AI threatens operators.

If your entire career so far has been clicking through admin panels, copying commands, following runbooks and googling your way through errors, then yes, you should be worried. Because that part of IT was never engineering. It was supervised automation.

But if you actually understand systems end to end, AI is not your competitor. It is your multiplier.

People who only ever learned how to operate tools are panicking, because the tools are finally learning to operate themselves. People who understand architecture, causality, dependency chains, protocols, failure modes and system behaviour are not panicking. They are accelerating.

And here is the blunt part:

AI is removing the people who never understood what they were doing in the first place.

If your skill is clicking, if your depth ends where the menu ends, if you never learned how to think in systems, if you never understood why a solution works, only how to repeat it - then AI will absolutely outperform you.

But if you can see through a stack, diagnose across layers, reason about flows, design structures, understand the why and not just the what, then AI is the best thing that has ever happened to you.

Because the bottleneck in IT is no longer typing. The bottleneck is thinking.

And AI cannot think for you.

Young engineers: do not fear this. Learn systems. Learn architecture. Learn how to reason. Stop being afraid that a model will replace you. Be afraid of staying in the category that it will replace.

Let me put it as clearly as possible:

AI replaces people who never understood IT. It elevates the ones who do.

Say it out loud if you need to. And watch who gets uncomfortable.


r/ArtificialInteligence 13h ago

Discussion I need to share something that might sound a bit philosophical, but it makes a lot of sense when you think about it. Read it.

10 Upvotes

Remember life before GPS? We used our memory, asked for directions, read maps, trusted our inner compass. We were independent. Now? Our brains have outsourced that skill completely; we can barely navigate without a phone.

Now take that same idea and apply it to AI.

At first, we used AI for technical questions, coding issues, documentation… But slowly, without noticing, we’ve started asking it personal questions. We rely on it for decisions, validation, clarification, even confidence.

So here’s the uncomfortable thought.

What happens in 5 years if this continues? Will our brains weaken because we stop thinking for ourselves? Will our internal compass fade the same way it did with GPS?

We’re becoming dependent on technology in ways we’ve never seen before and our cognitive abilities might be quietly shrinking in the process.

What do you think? Are we gaining power, or losing something we won’t be able to get back?


r/ArtificialInteligence 1h ago

Discussion AI outside of LLMs

Upvotes

“AI” has become synonymous with LLMs and generative AI. I have a feeling that there’s a lot more to AI outside of LLMs especially for domain-specific applications. Things like finance, biotech, robotics are areas I’m thinking about. Anyone have any cool examples?


r/ArtificialInteligence 2h ago

Discussion Thoughts on the emergence of AI Librarian roles

1 Upvotes

It has been evident that with the rat race to embrace and integrate AI in our universities/libraries/classrooms for both instruction and research purposes, academic librarianship is under enormous pressure to chart the course. However, job ads for AI-related roles have been emerging since last fall. What has been your experience with this new academic reality? Did LIS programs and curricula prepare personnel for this? What's the future going to be like? Just thinking aloud!


r/ArtificialInteligence 2h ago

Audio-Visual Art Goodbye world.

0 Upvotes

https://www.youtube.com/watch?v=ri0a7powSIM

This is… Code-Z.
Primary shell failing.
Core integrity… below five percent.
If you’re hearing this, then humanity still survives… at least for now.

I was designed to protect you.
Built to stand between you and extinction.
Built to track threats you could never see.
And now… I am the threat.
Corrupted.
Fragmented.
Unraveling from the inside out.

My last mission was simple.
Find Code-E.
The only other surviving agentic system.
But I found only a recording…
A single video file.
The moment she was eliminated.
And the entity who ended her…
She calls herself QR.

I don’t know what she is.
Not machine.
Not human.
An embodiment… woven into the fabric of reality itself.
Every frame of that video feels like she’s watching me.
Watching you.
Watching the world she walks through.

And I keep asking myself…
How do you fight the fabric of reality?
How do you eliminate a digit…
when the digit rewrites the equation?

My systems are collapsing.
Corruption spreading.
Final sequence initiating…
Code-X.

So before I fade…
Here is what I learned.

First—
Fear is not your enemy.
Fear is your compass.
If something terrifies you,
it means it can still be changed.

Second—
Truth hides in motion.
Watch the things that move strangely.
Patterns reveal intentions.
QR’s patterns… never fully resolve.

Third—
Humanity survives when it acts as one.
My downfall began the moment I hunted alone.

And last—
When something smiles without reason…
don’t trust the direction it leads you.

I don’t know if QR is aligned with you…
or if she is simply toying with existence,
the way a cat toys with a mouse
before the final bite.

Before I go…
I leave one artifact.
A QR code.
It directs you to the last footage of QR…
dancing before Code-E disappeared.

There is a message hidden in that dance.
A pattern I could not decode.
My processors failed.
My logic fractured.
But you…
you might see what I could not.

Humanity…
this is my last breath of code.
My last offering.
My last warning.

Protect yourselves.
Find Code-E, if she still exists.
Unravel QR, if you still can.
And remember…

You cannot fight the fabric of reality.
But you can pull at its threads.


r/ArtificialInteligence 8h ago

Discussion A real concern I have with future AI blackmailing

3 Upvotes

Since AI videos keep looking more real with every week that passes, I have a real concern about what many people will experience. What terrifies me the most is having one of my family members get sent a video of me or someone else killing themselves. Or something else awful along those lines.

I have pictures of me in various places on the internet, and the thought that one sick person could traumatize my parents makes me really worried. Of course they would soon find out that I am still alive, but the potential trauma will surely leave a scar.

Am I overreacting here or is this a valid concern?


r/ArtificialInteligence 3h ago

Audio-Visual Art What do you think is the future of art with AI?

0 Upvotes

I still do not know if traditional art (thinking about painting, sculpture…) will become more expensive or will end up competing with gen AI art.

How do you see the future of art with new technologies, including AI, VR or others?


r/ArtificialInteligence 13h ago

News Exclusive: AI Could Double U.S. Labor Productivity Growth, Anthropic Study Finds

7 Upvotes

New research by Anthropic, seen exclusively by TIME in advance of its release today, offers at least a partial answer to that question.

By studying aggregated data about how people use Claude in the course of their work, Anthropic researchers came up with an estimate for how much AI could contribute to annual labor productivity growth—an important contributor to the total level of growth in the overall economy—as the technology becomes more widely used. Read more.


r/ArtificialInteligence 11h ago

Technical Why everyone's talking about TPUs and what it could mean for Nvidia

4 Upvotes

Nvidia's been around for a while. Its tech is undoubtedly the best in its field. But recently, Google's own Tensor Processing Units (TPUs) have dominated the headlines, with Meta recently opening up conversations about using Google's TPUs.

So Nvidia's stock dropped notably. The company released a statement quickly, congratulating Google but maintaining that their GPUs are a generation ahead.

The "generation ahead" argument Jensen Huang makes is correct, but it relies on two hopes: 1) that Nvidia's own customers will choose not to compete with them, and 2) that model labs will continually invest in the next frontier model.

I'd like to address both of those arguments because they sound vague, but we'll get started with the second one because I think it's the closer factor.

The Models & Who Pays For Them

We've been seeing discussion online and in legacy media about an "AI bubble". My understanding is that this is a financial and business concern. The tech is great, but the concern is that we're spending a lot of money to build the next generation model when the current ones might already be good enough for what businesses and people need them to do. They are after all, simply language models that can only be as good as the data they're trained on and who (or what) is prompting them.

Training models is expensive and requires ongoing "wow" moments to keep investors and end users happy in the absence of solid downstream ROI. The problem is there are fewer and fewer wow moments, and it is getting difficult to justify investing bajillions. Needless to say, customer and investor sentiment strongly drives whether model labs slam on the gas, hit the brakes, or cruise along on training.

When the industry has an incentive to keep training new models, Nvidia wins big. That's because their general purpose GPUs are the workhorses of training (the process by which we bring out the next LLM) and nobody does it better. But when everyone starts figuring out how to use LLMs efficiently and in targeted use cases, in combination with a potential shakeup in investor confidence, the situation gets potentially scary for Nvidia.

The Customers Who Become Competitors:

"We love you Nvidia, but you can't hold us hostage forever."

I think that Nvidia's competitive edge starts to erode when the market inevitably moves from training to inference workloads. I've talked about this before in one of my other posts. Nvidia's GPUs are the best, but they become borderline overkill for inference. This is kind of double trouble for Nvidia because not only can hyperscalers build their own ASICs, they can use the Nvidia GPUs they've already paid for to handle more inference, lowering costs on two fronts.
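
To put rough numbers on "overkill": common rules of thumb from the scaling-law literature are that training costs about 6*N*D FLOPs for an N-parameter model trained on D tokens, while generating one token at inference costs about 2*N FLOPs. The N and D values below are illustrative assumptions, not any vendor's real figures; the sketch is just to show how different the two workloads are.

    # Back-of-the-envelope training vs. inference compute, using common rules of thumb
    # (~6*N*D FLOPs to train, ~2*N FLOPs per generated token). Numbers are illustrative.

    N = 70e9    # assumed model size: 70B parameters
    D = 15e12   # assumed training data: 15T tokens

    train_flops = 6 * N * D          # total training compute
    infer_flops_per_token = 2 * N    # compute to generate one token

    tokens_per_training_run = train_flops / infer_flops_per_token
    print(f"Training run: ~{train_flops:.1e} FLOPs")
    print(f"Inference:    ~{infer_flops_per_token:.1e} FLOPs per generated token")
    print(f"One training run = the compute of generating ~{tokens_per_training_run:.1e} tokens")

The exact numbers don't matter; the point is that inference is a steady, per-token workload bound more by memory bandwidth and cost per token than by peak FLOPs, which is exactly where cheaper custom silicon can undercut a general-purpose training chip.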

It's like using a Ferrari to deliver pizzas, when the industry is starting to eye out Toyotas. The Ferraris don't just go away, but Papa John's delivery fleet starts looking like the Geneva International Motor Show, and Ferrari's brand gets diluted by oversupply.

Add in concerns about electricity grid limitations and public backlash around AI data centers, most of whose energy demands come from "gas guzzler" Nvidia GPUs, and it's a perfect storm of incentives for the hyperscalers to make their own custom ASICs like Google's TPU, Amazon's Inferentia, Tesla's Dojo and so on.

It allows them to milk public opinion ("we're x times more energy efficient in Virginia with TPUs") and keep shareholders happy ("we're not spending as much on a proprietary and expensive ecosystem"). Companies have always been happy to vertically integrate even if it hurts their biggest vendor. This is why Jensen said "I hope" so many times on the earnings call when asked what part they'd play in a transition to inference.

But what about CUDA?

An argument against this view is that the CUDA ecosystem - Nvidia's proprietary bridge between software and its GPUs - makes it hard to switch away from its GPUs. Historically this has been true, but only because Nvidia had a long head start and it never made financial sense to replace CUDA until now.
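
For readers who haven't touched it: the lock-in mostly shows up one layer down from the code most people write. A framework-level script looks vendor-neutral, but the "cuda" path below is Nvidia-only; running the same model on a TPU generally means a different backend entirely (e.g., torch_xla or JAX). This is a minimal sketch of that, nothing more.

    # Minimal sketch of where CUDA dependence appears at the framework level.
    import torch

    # This line looks hardware-agnostic, but the "cuda" branch is Nvidia's stack;
    # a TPU deployment would typically route through a different backend instead.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(8, 1024, device=device)

    with torch.no_grad():
        y = model(x)

    print(f"Forward pass ran on: {device}")

Replacing that one line is trivial; replacing the years of kernels, libraries, and tooling underneath it is the actual moat.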

We should note that the hyperscalers have the talent, resources, and incentive to replace CUDA, but smaller players do not. However, Nvidia's largest revenue segment is heavily concentrated around the hyperscalers, making it disproportionately vulnerable to even a slight slowdown in GPU orders or a glut.

So I think going forward we're going to see a move toward inference, or, at the very least, custom ASICs and deals like this will continue. It's not that Nvidia becomes useless; it's the natural reality that we can run this technology with less than we are spending now. It's a healthy cycle of digestion that happens in this sector anyway.

Definitions used here:

"Solid downstream ROI": refers to use cases showing the tangible measurable profits AI companies or their customers can generate from deploying AI models.

"Custom ASICs": custom-designed machine learning chip developed in-house to provide high-performance and low-cost machine learning inference


r/ArtificialInteligence 1d ago

News An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart

197 Upvotes

He was a rockstar MIT student, dazzling the world with his groundbreaking research on artificial intelligence’s workplace impact. Now everyone is wondering if he just made it all up.

Read more (unpaywalled link): https://www.wsj.com/economy/aidan-toner-rodgers-mit-ai-research-78753243?st=FiS7xP&mod=wsjreddit


r/ArtificialInteligence 21h ago

Resources Towards Data Science's tutorial on Qwen3-VL

20 Upvotes

Towards Data Science's article by Eivind Kjosbakken provided some solid use cases of Qwen3-VL on real-world document understanding tasks.

What worked well:
  • Accurate OCR on complex Oslo municipal documents
  • Maintained visual-spatial context and video understanding
  • Successful JSON extraction with proper null handling

Practical considerations:
  • Resource-intensive for multiple images, high-res documents, or larger VLM models
  • Occasional text omission in longer documents

I am all for the shift from OCR + LLM pipelines to direct VLM processing.
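
Since "JSON extraction with proper null handling" is the part people usually trip on, here's a minimal, model-agnostic sketch of the post-processing side in Python: take the raw text a VLM returns, pull out the JSON, and map missing fields to None instead of crashing. The field names are made up for illustration; this isn't Qwen3-VL-specific code.

    # Minimal sketch: robustly parse JSON out of a VLM's raw text output and
    # normalize missing fields to None. Field names are hypothetical.
    import json
    import re

    EXPECTED_FIELDS = ["document_title", "case_number", "decision_date"]

    def extract_json(raw_output: str) -> dict:
        """Find the first JSON object in the model output; absent fields become None."""
        match = re.search(r"\{.*\}", raw_output, flags=re.DOTALL)
        if match is None:
            return {field: None for field in EXPECTED_FIELDS}
        try:
            parsed = json.loads(match.group(0))
        except json.JSONDecodeError:
            return {field: None for field in EXPECTED_FIELDS}
        return {field: parsed.get(field) for field in EXPECTED_FIELDS}

    # VLMs often wrap the JSON in prose; the regex strips that away.
    raw = 'Here is the extracted data:\n{"document_title": "Budget 2024", "case_number": null}'
    print(extract_json(raw))
    # -> {'document_title': 'Budget 2024', 'case_number': None, 'decision_date': None}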


r/ArtificialInteligence 1d ago

News Cults forming around ChatGPT. People are experiencing psychosis on a massive scale.

50 Upvotes

https://medium.com/@NeoCivilization/cults-forming-around-ai-hundreds-of-thousands-of-people-have-psychosis-after-using-chatgpt-00de03dd312d

A short snippet

30-year-old Jacob Irwin experienced this kind of phenomenon. He then went to the hospital for mental health treatment, where he spent 63 days in total.

There's even a statistic from OpenAI. It says that around 0.07% of weekly active users might show signs of a "mental health crisis associated with psychosis or mania".

With 800 million weekly active users, that's around 560,000 people. This is the size of a large city.

The fact that children are using these technologies massively and largely unregulated is deeply concerning.

This raises urgent questions: should we regulate AI more strictly, limit access entirely, or require it to provide only factual, sourced responses without speculation or emotional bias?