
The wtf is AI up to thread

According to my AI, this is an average day in the life in 2032:



In 2032, your daily life will likely feel like a more seamless, "frictionless" version of today. While we won't have flying cars just yet, the way you move, work, and interact with your home will be governed by more intuitive AI and a "20-minute neighborhood" philosophy.
Here is a glimpse into a Tuesday in 2032:
🌅 7:30 AM: The Proactive Morning
Your alarm doesn't just ring; your Smart Home OS has been monitoring your sleep cycles. It nudges you awake when you're in a light sleep phase.
* Health Integration: Your bathroom mirror displays a quick "Health Score" based on vitals tracked by your wearable or mattress sensors.
* Personalized Nutrition: Your fridge—which tracks its own inventory—has already alerted you that your milk is low and has added it to an automated delivery drone queue for later that morning.
🚆 9:00 AM: The "Multimodal" Commute
If you work in a physical office, the commute is no longer a battle with a steering wheel.
* The Metro Tunnel: If you're in Melbourne, you’re using the now-mature Metro Tunnel system. Trains run every few minutes, and you pay with a digital wallet or a biometric scan—physical Myki cards are a relic of the past.
* Micro-Mobility: For the "last mile," you likely hop on a shared electric scooter or a docked e-bike, now integrated into dedicated, protected lanes far better than they were in 2026.
💻 11:00 AM: The "Hybrid 2.0" Workplace
The office is no longer a place for solo deep work; it’s a social and collaborative hub.
* AI Sidekicks: Much of your "busy work" (scheduling, summarizing long email threads, or drafting reports) is handled by an Autonomous Agent. You spend your time reviewing its output rather than creating from scratch.
* Spatial Meetings: For colleagues working remotely, you don't just stare at a 2D Zoom screen. You might put on lightweight AR (Augmented Reality) glasses to see their "holographic" avatars sitting at the desk next to you.
🥗 1:00 PM: The Localized Lunch
The "20-minute neighborhood" concept has matured. Whether you are at home or at a suburban hub, most of what you need is within a short walk.
* Sustainability: You’ll notice fewer plastic containers; circular economy laws mean your takeaway lunch comes in standardized, returnable packaging that gets picked up by a local collection service.
🛒 6:00 PM: Frictionless Shopping
On your way home, you swing by a "dark store" or a smart locker.
* Automated Pickup: You didn't "go shopping." You tapped a button on the train, and your groceries were gathered by robots and placed in a temperature-controlled locker at your station or apartment building.
* Energy Management: Your EV (Electric Vehicle) is plugged in at home, but it’s not just drawing power. Through V2G (Vehicle-to-Grid) technology, your car sells power back to the grid during the evening peak to help stabilize the city's renewable energy flow.
🌙 10:00 PM: High-Definition Entertainment
* Generative Media: You might ask your TV to "create a 20-minute cozy mystery show starring my favorite actors." AI generates a personalized piece of content just for you.
* Digital Wellness: At a set time, your home’s lighting shifts to "circadian mode," filtering out blue light and dimming automatically to prep your brain for rest.
Key Shifts by 2032
* Currency: Physical cash is extremely rare; most transactions are biometric or via stablecoins.
* Privacy: You likely pay a subscription for a "Privacy Firewall" to manage how much of your data these AI systems can actually see.
* Climate: You check the "Air Quality Index" and "Heat Map" on your weather app as habitually as you check the temperature today.
 
The Metro Tunnel: If you're in Melbourne, you’re using the now-mature Metro Tunnel system. Trains run every few minutes, and you pay with a digital wallet or a biometric scan—physical Myki cards are a relic of the past.....

It took contractors 4 years to build 2 extra lanes in a 1.5km stretch of road near me.

Brisbane's new metro system, which encompasses an overground busway and a four-stop underground rail element, has taken six years so far to build...

The underground bit is well behind schedule.
 
I really think this is an issue people need to take to the streets over.

Yet those like Musk have people all arguing about immigrants (or propping up proxy movements against migrants) while the whole agenda is to make everyone’s jobs obsolete and the elite AI folks’ bank accounts massive.

So many thick fuckers are just blindly walking into it, yabbering about how great One Nation is.

This is a much bigger threat to their existence than some bloke moving here from Madagascar.
 
So Anthropic said "we want guarantees that you don't use our AI for mass surveillance and autonomous weapons systems", was refused, and ChatGPT stepped in and said "we'll do it".

Fantastic. Skynet is on its way.

 
So when you ask ChatGPT about homophobia it rightfully condemns it. But why does it only condemn non-Muslim countries’ homophobic laws and gives Muslim countries a pass?

 
UN: ssh…

Some of your Western progressive types will go around and slam hard-line Christian homophobia but stay quiet about much worse stuff happening in some Muslim countries...

Maybe like death penalties in Iran or Saudi Arabia, and huge majorities of Muslim Brotherhood affiliated communities hating on gay people.


Progressives will do this to avoid seeming racist or “Islamophobic,” putting identity politics over real human rights. But the double standards are glaringly obvious...
 
Remember how things always start up for free? Well, it's time to recoup the untold billions put into AI. Pay the cashier, sport.





(I've pasted the article in case you don't have an account or want one.)

P.S. If it's too long for you to read, I don't give a flying fuck. :)



Over the past two years, a lot of people have become accustomed to using AI for all sorts of things, and most importantly, for free or at very low cost. Silicon Valley is familiar with this illusion: a very expensive technology is made widely available as companies race to capture market share, usage habits and dependency.

Think back to Uber, DoorDash and all those other “millennial lifestyle subsidies” described by Derek Thompson in The Atlantic in 2022: services that seemed magically cheap because, in reality, someone could afford to lose money to buy growth. Exactly the same thing is happening with AI. It costs far more to produce than we pay, meaning we are very close to the point at which companies will have to start charging serious money for access to it.

To understand what’s going on, we need to clarify something: just because the inference unit cost falls doesn’t mean that access will be cheaper for the end user. In fact, the opposite is more likely to happen. Models will be more efficient, chips will be better, and some tasks will cost less per token, but total spending will continue to skyrocket because demand is growing even faster as users want longer contexts, multimodality, agents, search, memory and task execution; and because companies need to amortize gigantic investments. Meta, for example, closed 2025 with $72 billion in capex and anticipated between $115 billion and $135 billion for 2026. In February, Anthropic announced a $30 billion financing round valuing it at $380 billion post-money, while Microsoft, NVIDIA and Anthropic have sealed an alliance that will see Anthropic commit to buy $30 billion of Azure capacity and up to an additional gigawatt of compute. That no longer looks anything like an experiment: it looks like a capital-intensive industry that sooner or later will demand profitability.

The question now is not whether AI will continue to be free, but who will subsidize it and in exchange for what. In some cases, the answer will be very simple: the user will pay for a subscription, or several. In others, the employer will pay for it within a corporate suite, the same way we pay for email, office software, CRM or cybersecurity today. Some will pay indirectly, because it will be packaged into the operating system, the browser, the phone, the cloud or the business plan. AI will become infrastructure, which is rarely charged for transparently: it’s bundled, cross-subsidized, packed into offerings of all kinds, and used to defend adjacent businesses with higher margins.

We’re already seeing a strong return of the old advertising model. OpenAI has already announced its approach: it is testing ads in the United States for Free and Go accounts, keeping the Plus, Pro, Business, Enterprise and Edu tiers ad-free, explaining: “advertisers have no ability to shape, rank, or alter ChatGPT’s responses”. Sound familiar? It means that even the company that did the most to popularize the conversational chatbot assumes that not everyone can or wants to pay a monthly fee, but that someone has to fund mass access. In other words: “free AI” is starting to look suspiciously like the old internet economy, where the free user was not the customer, but the inventory.

Google is going down a similar route, but with a huge structural advantage: it already has the advertising business, the commercial inventory, and the search behavior. Its own documentation for advertisers makes it clear that ads can be shown above, below or within AI Overviews, and that they are served taking into account both the user’s query and the content of the AI-generated summary. In other words, Google does not need to invent a business model for conversational AI: all it has to do is refine the one it’s been using for decades. Meta, for its part, does not depend on charging the user directly: its results documents and corporate communication insist that AI is already improving clicks, conversions, advertising quality and its messaging business with companies. AI doesn’t have to be paid for with a subscription: it can be funded by bombarding users with advertising, with more trade in mostly dubious products, and more transactions and scams on the unwary across its platforms.

That brings us to a very interesting junction. The first way will be a premium AI, with better models, fewer limits, faster, more context, more tools and zero advertising, reserved for those who can and want to pay. The second will be a “free” or low-cost AI financed by ads, promotions, affiliation, e-commerce or vertical integration. The third, increasingly relevant, will be local execution: smaller, fine-tuned and reasonably good models running on personal computers, workstations or individually owned infrastructure. IEEE Spectrum was already signaling at the end of 2025 that the local execution of LLMs was going to drive the biggest change in laptops in decades, and tools such as Ollama already make it easier to run models such as DeepSeek-R1, Qwen3 or Gemma 3 on local hardware. The local alternative will not be universal or replace frontier models in everything, but for many tasks (summaries, writing, classification, code assistance, working with sensitive documents, etc.) it will be more than enough.

Here another important actor appears: open models, many of them Chinese. DeepSeek not only offers web access and APIs, but launched DeepSeek R1 as an open model with an MIT license, making it clear that its models and code could be freely distilled and commercialized. That completely alters the economic conversation, because it turns part of AI into a portable commodity. It is no longer just a matter of choosing between paying OpenAI, Anthropic or Google. It is also a question of deciding whether it is more worthwhile for a company, university, media or firm to deploy an open model in its own infrastructure, accepting somewhat lower performance in some tasks in exchange for sovereignty, cost control, privacy and the absence of lock-in.

Either way, local or open source aren’t going to be a universal panacea: for many users, buying a sufficiently powerful machine and running a local LLM at home will be too expensive or simply unnecessary. Most don’t want to manage VRAM, quantizations or latencies — they want to ask questions and get answers. That is why the market will probably tend to stratify. There will be a free ad-supported or cross-subsidized tier, a subscription-based professional tier, a corporate tier integrated into enterprise suites, and a sovereign or domestic tier based on open models. Access to chatbots will not disappear. What will disappear is the naivety of believing that this access can be universal, continuous, multimodal and of high quality without someone paying an astronomical bill behind it.

In reality, the future of AI will be neither free nor expensive: it will be ambiguous. We will pay for it in one of four ways, or perhaps a combination: money, attention, data or dependency. Some will pay $20 or $200 a month to not see ads and have the best capabilities. Others will accept sponsored responses, stricter limits, and less privacy in exchange for apparently free access and becoming the product. Others won’t even know they’re paying for it, because it’ll be included in Microsoft 365, Google Workspace, a device or an enterprise contract. And a growing group will decide that, for certain uses, it is more profitable to download an open model and run it on their own.
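As a rough back-of-the-envelope version of that trade-off: the $20 and $200 monthly prices come from the article, but the $2,500 local-rig cost and the three-year hardware lifetime below are illustrative assumptions, not figures from the piece.

```python
# Toy annualized-cost comparison of the payment tiers described above.
# The $20/mo and $200/mo figures are from the article; the hardware
# price and lifetime are hypothetical, for illustration only.

def annual_subscription_cost(monthly_fee: float) -> float:
    """Total paid over one year of a flat monthly subscription."""
    return monthly_fee * 12

def annualized_local_cost(hardware_cost: float, lifetime_years: float) -> float:
    """Hardware cost spread evenly over its assumed useful life."""
    return hardware_cost / lifetime_years

plus_tier = annual_subscription_cost(20)     # $240/year
pro_tier = annual_subscription_cost(200)     # $2,400/year
local_rig = annualized_local_cost(2500, 3)   # ~$833/year, hardware only

print(f"$20/mo tier:  ${plus_tier:,.0f}/year")
print(f"$200/mo tier: ${pro_tier:,.0f}/year")
print(f"Local rig:    ${local_rig:,.0f}/year (excludes electricity and upkeep)")
```

Under these assumed numbers, a modest local rig sits between the two subscription tiers, which is exactly why the author expects the market to stratify rather than converge on one model.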

The era of cheap AI isn’t coming to an end because the technology has failed: it’s happening because it works, generates dependency and someone has decided that it’s time to monetize it.
 
Mark Zuckerberg hailed it as the future of the internet, but after sinking nearly $80 billion into it, Meta is killing off its VR platform Horizon Worlds.

The company announced it will be removed from Quest headsets by mid-June, and will disappear from the Quest store in just a few weeks.
 