I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I know a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?
Much like automated machinery, it could in theory free workers to do more important, valuable work and leave the menial stuff for the machine/AI. In theory this should make everyone richer, since companies can produce things more cheaply and more of the profits can go to worker salaries.
Unfortunately, what happens is that the extra productivity doesn’t go to the workers; it just lets the owners of the companies take more of the money with fewer expenses. Usually they fire the human worker rather than giving them a more useful position.
So yea I’m not sure myself tbh
No no, you found the actual “use” for AI as far as businesses go. They don’t care about the human cost of adopting AI and firing large swaths of workers, just the profits.
Which is why governments should be quickly moving to highly regulate AI and its uses. But governments are slow plodding things full of old people who get confused by toasters.
As always capitalism kills.
This is the part that bothers me the most, I think.
Trouble is the best way to regulate it isn’t clear. If the new tool can do the job at least as well and cheaper, just disallowing it is less beneficial to society. You can tax its use until it is only a little cheaper, but then you have to get people to approve of taxes. Et cetera
This already happened with the industrial revolution. It did make the rich awfully rich, but let’s be honest. People are way better off today too.
It’s not perfect, but it does help in the long run. Also, it makes a big difference which country you’re in.
Capitalist socialism will be way better off than hardcore capitalism, because the mindset and systems are already in place to let it benefit the people more.
Yes, that way the government will be able to make sure it benefits the right people. And we will call it the national socialism… wait… no!
The question wasn’t “in theory, are there any genuine benefits”; it was whether there are any right now.
Most email spam detection and antimalware use ML. There are also use cases in medicine, like trying to predict early whether someone has a condition.
It’s also being used in drug R&D to find compounds with similar properties, like antimicrobial activity, afaik.
Medical use is absolutely revolutionary. From GP consultations to reading test results and radiographs, AI is already better than humans and will keep getting better and better.
Computers are exceptionally good at storing large amounts of data, and with ML they are great at taking a lot of input and inferring a result from it. That is essentially diagnosing in a nutshell.
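To make that pattern concrete, here’s a minimal sketch (purely illustrative, with made-up data and features) of the same “lots of labelled input, inferred result” idea that spam filters and diagnostic aids both rely on:

```python
# Minimal sketch with fabricated data: train a classifier on labelled patient
# measurements, then infer a risk estimate for a new patient.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features per patient (age, blood pressure, two lab values)
# and a label marking a confirmed diagnosis.
X = rng.random((500, 4))
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)  # stand-in for real labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# For a new patient the model gives a probability, not a verdict --
# a clinician still has to interpret it.
print("risk estimate:", model.predict_proba(X_test[:1])[0, 1])
```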
I read that one LLM was so good at detecting TB from X-rays that they reverse-engineered the “black box” code hoping for some insight doctors could use. Turns out, the AI was biased toward the age of the X-ray machine that took each photo, because TB is more common in developing countries that have older equipment. Womp womp.
A large language model was used to detect TB from X-rays? Do you not just mean machine learning?
There are supposedly multiple Large Language Model Radiology Report Generators in development. Can’t say if any of them are actually useful at all, though.
Okay, but there still needs to be a part that processes the scan images, and that’s not an LLM.
So you’re saying because the LLM isn’t operating the machinery and processing the data start to finish without any non-LLM software then none of it is LLM? Stay off the drugs, kid.
What’s TB?
Tuberculosis
Tuberculosis
Watch any video at random by John Green (vlogbrothers, and author of several successful books that I haven’t read) and you’ll know more than you could ever hope about TB.
That’s super interesting, TIL
I hadn’t considered this. It’s interesting stuff. My old doctor used to just Google stuff in front of me and then repeat the info as if I hadn’t been there for the last five minutes.
This sort of feels like someone using a PC for the first time in 1989 and asking what it does that they can’t do on a piece of paper with a calculator. They may not have been far off at the time, but they would be missing the point. This is a paradigm shift that allows for a single application to fulfill the role of, eventually, infinite applications. And yes it starts with mundane tasks. You know, the kind people don’t want to do themselves.
The problem is that most of the things it feels like we can currently see applications for are… kinda bad. Actually repulsive, frankly. Like, I don’t want those things. I don’t wanna talk to an AI to order my Big Mac, or get an AI answer instead of just a highlighted excerpt from a webpage when I search things. I don’t want a world where artists have to compete with image generators to make a living, or where weird creepy porn that chases and satisfies ever more unrealistic expectations is the norm. I don’t want to talk to chatbots that use statistical analysis to convincingly sell me lies they don’t understand.
I just wanna talk to actual people. I wanna see art made by people, I wanna look at pictures of the bodies of actual human beings, I wanna see the animations that humans poured their soul into, I wanna see the actual text a person wrote on the subject I’m researching. I wanna do simple things, in simple ways, and the world that it feels like AI companies are offering us honestly sucks, and as soon as that door is fully opened things will just be permanently worse. Convenience is great, but I don’t want a robot to feed me a weird gross regurgitation of reality or approximation of human interaction, like a bird that chews and digests its food for its babies. I don’t wanna consume the spit-up of an overgrown algorithm. It’s a gross idea of how we could engage with the world. It obfuscates the humanity of whatever it touches, and the humanity is the worthwhile part. There comes a point where the abstraction is abstracting away everything of value and leaving you with the most sanitized version.
If AI was just gonna be used to improve medicine and translate books or webpages, or as an interactive accessibility tool, or to do actually helpful shit, maybe I wouldn’t be so opposed to it, but it feels like everything consumer- or employee-facing that AI is offering is awful and something I absolutely do not want. But companies don’t care, and that shitty world is gonna be the reality cause it’s profitable.
Well then I guess I’d ask you to reconsider your answer, but from the perspective of 1989. I’d imagine that’s the same answer you’d give to the personal computer. AI isn’t going to make things more complicated; it’s going to make things simpler. But people will create a more complicated (diverse) world in the vacuum that leaves. Just like the ox-pulled plow that made it easier to till farmland led to more complex agricultural societies. This type of advancement has been the story of human history since its beginning. Your perspective seems most concerned with people using this advancement against you, but our future now holds the possibility of having this AI on your side.
Using it to synopsize complicated TOS that corporations use to obfuscate what you’re agreeing to, actually answering questions instead of needing to search through ad-riddled web pages, allowing more people to become artists and create their vision.
Your examples of useful ways to use AI are great. So help build or support them. If you only look at the future corporations are selling you, yeah, it’s going to look like a bleak corporate nightmare. But the truth is technology empowers the individual. So we need to do something good with that power.
TBF if a mathematician or a programmer cannot do it on paper then they’ve kind of failed and probably won’t have any notable impact. Paper math didn’t end when consumer computers came about.
Wrap it up, climate scientists, the show is over! This lad said he can do your job without the supercomputer.
You think Supercomputers are designing and building themselves, you fucking donkey? You think ChatGPT has the solution to Climate Change?
I know plenty of modern programmers who are empowered by the ease with which they can learn the trade now. Some never go deeper than front-end developer, because there’s good money there. That job would look nothing like it does today if it had to be done by hand.
Ah yes, the html programmers. Top minds of our generation, them. /s
AI is a very broad topic. Unless you only want to talk about Large Language Models (like ChatGPT) or AI Image Generators (Midjourney) there are a lot of uses for AI that you seem to not be considering.
It’s great for upscaling old videos (this falls under image-generating AI, since it can be used for colorizing, improving details, and adding in additional frames), so that you end up with something like: https://www.youtube.com/watch?v=hZ1OgQL9_Cw
It’s useful for scanning an image for text and being able to copy it out (OCR); there’s a small sketch of this below.
It’s excellent if you’re deaf, or sitting in a lobby with a muted live broadcast and want to see what is being said with closed captions (Speech to Text).
Flying your own drone with object detection/avoidance.
There’s a lot more, but basically, it’s great at taking mundane tasks where you’re stuck doing the same (or similar) thing over, and over, and over again, and automating it.
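For the OCR point above, a minimal sketch of how simple that use case can be, assuming the Tesseract engine plus the pytesseract and Pillow packages are installed (the file name is just a placeholder):

```python
# OCR sketch: pull the text out of a scanned image so it can be copied.
from PIL import Image
import pytesseract

# Point this at any photo or scan containing text.
text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)  # recognized text, ready to paste elsewhere
```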
Yeah that’s interesting.
I think most of those are only labelled AI to generate tech hype, though? Like, sure, machine learning and maybe even LLMs can be and are used for those, but it isn’t a machine being given human-discernible input and pretending to give human output.
“AI” is the broadest umbrella term for any of these tools. That’s why I pointed out that OP really should be a bit more specific as to what they mean with their question.
AI doesn’t have the same meaning that it had over 10 years ago when we used to use it exclusively for machines that could think for themselves.
They are the greatest gift to solo-brainstorming that I’ve ever encountered.
The fruits of those brainstorming sessions are like Homer Simpson designing a new car.
You’re confusing brainstorming with content generation. LLMs are great for brainstorming: they can quickly churn out dozens of ideas for my D&D campaign, which I then look through, discard the garbage, keep the good bits of, and riff off of before incorporating into my campaign. If I just used everything it suggested blindly, yeah, nightmare fuel. For brainstorming though, it’s fantastic.
Exactly. It can generate those base-level ideas much faster and with higher fidelity than humans can without it, and that can serve us at the hobby level with D&D, or up at the business level with writers’ rooms and such.
The important point is that you still need someone good at making that kind of thing to look over and finish what you’re making, or you end up with paintings with too many fingers or stories full of contradictions.
Any kid who uses it to craft their campaign is lazy and depriving themselves of a valuable experience, any professional who uses it to write a book, script, or study is wildly unethical, and both are creating a much, much worse product than a human without reliance on them. That is the reality of a model that, even at 100% accuracy, would be exactly as flawed as human output, and we’re nowhere near that accuracy.
But the point is that you don’t use it to make the campaign or write the book. You use it as a tool to help yourself make a campaign or write a book. Ignoring the potential of AI as a tool just because it can’t do the whole job for you is silly. That would be a bit like saying you are a fool for using a sponge when washing because it will never get everything by itself…
I get it now! You don’t use it for the thing you use it for but instead as a tool to create the thing that you’ve used it for for yourself because the magic was inside all of us but also the GPT all along. /sarcasm
“don’t feed the trolls,” they said, but did she ever listen?
No, I guess I didn’t…
I would retort that the exact opposite is true, that content generation is the only thing LLMs are good at because they often forget the context of their previous statements.
I think we’re saying the same thing there: LLMs are great at spewing out a ton of content, which makes them a great tool for brainstorming. The content they create is not necessarily trustworthy or even good, but it can be great fuel for the creative process.
My stance is that spewing out a ton of flawed, unrelated content is not conducive to creating good content, and therefore LLMs are not useful for writing. That hasn’t changed.
AI has some interesting use cases, but should not be trusted 100%.
Like GitHub Copilot (or any “code copilot”):
- Good for repetitive stuff with minor changes
- Can help with common easy coding errors
- Code quality can take a big hit
- For coding beginners, it can lead to a deficit of real understanding of your code
(and because of that could lead to bugs, security backdoors…)
Like translations (code or natural language):
- Good translation of the common/big languages (English, German…)
- Can extend a brief summary to a big wall of text (and back)
- If translated wrongly, it can lead to someone else misunderstanding it, and the point gets missed
- It removes the “human” part. Most of the time, depending on the context, it can be easily identified.
Like classification of text/images for moderation:
- Helps identify bad-faith text/images
- False Positives can be annoying to deal with.
But don’t do anything that is IMPORTANT with AI; only use it for fun, or if you can tell whether the code/text the AI wrote is correct!
Adding to the language section, it’s also really good at guessing words if you give it a decent definition. I think this has other applications but it’s quite useful for people like me with the occasionally leaky brain.
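As a rough illustration of that “word on the tip of my tongue” trick (not anything the commenter specifically described), it boils down to a one-line prompt; the client library and model name here are assumptions, so swap in whatever chat model you have access to:

```python
# Illustrative only: ask a chat model to guess a word from its definition.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

definition = "the feeling of pleasure at someone else's misfortune"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[{
        "role": "user",
        "content": f"What single word best fits this definition: {definition}?",
    }],
)
print(response.choices[0].message.content)  # likely "schadenfreude"
```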
I sometimes have the same issue!
Actually the summaries are good, but you have to know some of it anyway and then check to see if it’s just making stuff up. That’s been my experience.
Anything that requires tons of iteration can be done way faster with AI. Finding new chemical formulas for medicine, as an example. It takes a “throw everything at the wall and see what sticks” approach, but it’s still more effective than a human.
Brute force is AI now?
Brute force would be “throw at the wall one at a time until one sticks”.
As long as everything gets thrown, it’s still brute force, but the reason they use AI for it is that it can throw a lot more, a lot faster.
I think by broad definitions it can be, yes.
Think about it. AI is just throwing a ton of sample data in and filtering out the results that are least correct.
Presumably in order to determine whether, e.g., the chemical is worth looking at in the first place.
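A minimal sketch of that “score everything, keep what sticks” pattern, with a made-up model and made-up compound features (nothing here comes from a real drug-discovery pipeline):

```python
# Virtual-screening sketch: a model trained on compounds we've already tested
# cheaply scores a huge pile of untested candidates; only the top few would
# go on to expensive lab work. All data and features are fabricated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend training set: known compounds with measured activity.
X_known = rng.random((200, 8))
y_known = 2 * X_known[:, 0] + X_known[:, 3]  # stand-in for lab results

model = RandomForestRegressor(n_estimators=100).fit(X_known, y_known)

# "Throw everything at the wall": score 100,000 candidates in seconds.
X_candidates = rng.random((100_000, 8))
scores = model.predict(X_candidates)

# Keep only the most promising handful for real-world follow-up.
top = np.argsort(scores)[-10:]
print("candidate indices worth a closer look:", top)
```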
AI is a revolution in learning.
Very true. I learned how to code surprisingly fast.
And even the mistakes the AI made were good, because I learned so much from seeing what changes it made to fix them.
Bullshit. Reading a book on a language is just as fast and it doesn’t randomly lie or make up entire documentations as an added bonus.
I don’t believe you, this seems like a bot posting.
An interesting point that I saw about a trial at one of the smaller London Tube stations:
- Most of the features involved a human who could come and assist or review the footage. The AI being able to flag wheelchair users was good because the station doesn’t have wheelchair access without assistance.
- When they tried to make a heuristic for automatically flagging aggressive people, they found that people with their arms up tend to be aggressive. This flagging system led to the unexpected feature that if a Transport for London (TfL) staff member needed assistance (i.e. if medical assistance was necessary, or if someone was being aggressive towards them), the staff member could put their arms up to bring attention onto themselves.
That last one especially seems neat. It seems like the kind of use case where AI has the most power when it’s used as a tool to augment human systems, rather than taking humans out of stuff.
“Once implemented, the system was able to identify many black men who were then immediately confronted. Confrontations with black men are now documented at 87% of aggressive confrontations in TFL locations.” /sarcasm
I don’t think designing AI to make generalizations based on physical appearances is a very good idea to start with.
While not AI, that’s my goal with my home automation: to augment my life and make certain things easier and/or more efficient.
https://www.home-assistant.io/blog/2016/01/19/perfect-home-automation/
I pasted your question verbatim into Bing Chat. Here’s what it responded with:
Artificial Intelligence (AI) indeed has a wide range of benefits that extend beyond the ones you’ve mentioned. Here are some areas where AI is making a significant impact:
- Healthcare: AI is used in predicting disease outbreaks, drug discovery, personalized treatment plans, and improving patient care. For example, machine learning models can analyze medical images to detect diseases at early stages.
- Education: AI can provide personalized learning experiences, identify gaps in learning materials, and automate administrative tasks. It can adapt to individual learning styles, making education more accessible.
- Environment: AI can help in climate modeling, predicting natural disasters, and monitoring wildlife. It’s also used in optimizing energy usage in buildings and manufacturing processes, contributing to sustainability.
- Transportation: Autonomous vehicles use AI for navigation, safety, and traffic management. AI can also optimize logistics, leading to reduced costs and environmental impact.
- Security: AI can enhance cybersecurity by detecting unusual patterns or anomalies in data, helping to prevent cyber attacks. It’s also used in surveillance systems to identify potential threats.
- Accessibility: AI can help people with disabilities by providing tools that improve their ability to interact with the world. Examples include speech recognition for those unable to use a keyboard, and visual recognition systems that can describe the environment to visually impaired individuals.
While it’s true that AI can be used to generate profits for corporations, it’s important to remember that many of these advancements also lead to societal benefits. However, like any technology, AI can be misused, and it’s crucial to have regulations and ethical guidelines in place to prevent such misuse. The creation of “bots and fake content” you mentioned is one such misuse, and efforts are ongoing to combat these issues.
In conclusion, AI has the potential to greatly benefit society in many ways, but it’s equally important to be aware of and address its challenges.
Seems like a pretty comprehensive list of the things I’m aware of myself. There’s also tons of interesting future applications being worked on that, if they pan out, will be hugely beneficial in all sorts of ways. From what I’ve seen of what the tech is capable of we’re looking at a revolution here.
Seems a bit biased to ask an AI for the benefits of AI…
Not saying anything specific is wrong, just that appearances matter.
It was in part a demonstration. I see a huge number of questions posted these days that could be trivially answered by an AI.
Try asking Bing Chat for negative aspects of AI, it’ll give you those too.
Was thinking the same… let’s ask Honest Joe the car salesman which one is the best means of transport.
I think implying that it has a bias is giving the Advanced Auto Prediction Engine a bit too much credit.
Oh, I am in fact giving the giant autocomplete function little credit. But just like any computer system, an AI can reflect the biases of its creators and dataset. Similarly, the computer can only give an answer to the question it has been asked.
Dataset-wise, we don’t know exactly what the bot was trained on, other than “a lot”. I would like to hope its creators acted with good judgement, but as creators/maintainers of the AI, they may have an inherent (even if unintentional) bias towards the creation and adoption of AI. Just like how some speech recognition models have issues with some dialects, or image recognition has issues with some skin tones - both based on the datasets they ingested.
The question itself invites at least some bias and only asks for benefits. I work in IT, and I see this situation all the time with the questions some people have in tickets: the question will be “how do I do x”, and while x is a perfectly reasonable thing for someone to want to do, it’s not really the final answer. As reasoning humans, we can also take the context of a question to provide additional details without blindly reciting information from the first few lmgtfy results.
(Stop reading here if you don’t want a ramble)
AI is growing yes and it’s getting better, but it’s still a very immature field. Many of its beneficial cases have serious drawbacks that mean it should NOT be “given full control of a starship”, so to speak.
- Driverless cars still need very good markings on the road to stay in lane, but a human has better pattern matching to find lanes - even in a snow drift.
- Research queries are especially affected, with chatbots hallucinating references that don’t exist despite being formatted correctly. To that specifically:
- Two lawyers have been caught separately using chatbots for research and submitting their work without validating the answer. They were caught because they cited a case which supported their arguments but did not exist.
- A chatbot trained to operate as a customer support representative invented a refund policy that did not exist. As decided by a small claims court, the airline was forced to honor this policy.
- In an online forum while trying to determine if a piece of software had a specific functionality, I encountered a user who had copied the question into chatgpt and pasted the response. It was a command option that was exactly what I and the forum poster needed, but sadly did not exist. On further research, there was a bug report open for a few years to add this functionality that was not yet implemented
- A coworker asked an LLM if a specific Windows powershell commands existed. It responded with documentation about a very nicely formatted command that was exactly what we needed, but alas did not exist. It had to be told that it was wrong four times before it gave us an answer that worked.
While OP’s question is about the benefits, I think it’s also important to talk about the drawbacks at the same time. All that information could be inadvertently filtered out. Would you blindly trust the health of you child or significant other to a chatbot that may or may not be hallucinating? Would you want your boss to fire you because the computer determined your recorded task time to resolution was low? What about all those dozens of people you helped in side chats that don’t have tickets?
There’s a great saying about not letting perfection get in the way of progress, meaning that we shouldn’t get too caught up on the last 10-20% of completion. But with decision making that can affect peoples’ lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time or not trust it to have full controls at all.
As the future currently stands, we still need humans constantly auditing the decisions of our computers (both standard procedural and AI) for safety’s sake. All of those examples above could have been solved by a trained human gating the result. In the powershell case, my coworker was that person. If we’re trusting the computers with as much decision-making as that Bing answer proposes, the AI models need to be MUCH better trained at how to do their jobs than they currently are. Am I saying we should stop using and researching AI? No, but not enough people currently understand that these tools have incredibly rough edges, and the ability for a human to verify answers is absolutely critical.
Lastly, are humans biased? Yes absolutely. You can probably see my own bias in the construction of this answer.
But with decision making that can affect peoples’ lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time or not trust it to have full controls at all.
👏👏👏
Yes, dystopia has already arrived and we are all going to suffer. Here are just a few simple examples of blind trust in algorithms ruining people’s lives. And day by day, more are coming.
Before AI: https://sg.finance.yahoo.com/news/prison-bankruptcy-suicide-software-glitch-080025767.html
After AI: https://news.yahoo.com/man-raped-jail-ai-technology-210846029.html
Our software uses ML to detect tax fraud and since tax offices are usually understaffed they can now go after more cases. So yes?
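Not that commenter’s actual system, but a minimal sketch of the usual pattern: an anomaly detector flags unusual-looking returns so a small staff can focus its reviews (features and data here are entirely made up):

```python
# Illustrative anomaly-detection sketch: surface suspicious tax returns
# for a human auditor to review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical features per return: income, deductions, expenses, amendments.
returns = rng.normal(size=(10_000, 4))

detector = IsolationForest(contamination=0.01, random_state=1).fit(returns)
flags = detector.predict(returns)  # -1 = anomalous, 1 = looks normal

suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} returns queued for human review")
```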
It’s sped up my retouching workflows. I can automate things that a few years ago would’ve needed quite a lot of time spent with manual brush work.
Also, in the creative industries it’s a massive time saver for conceptual work. Think storyboarding and scamping, first-stage visuals, that kind of thing.
Someone I know recently published an enormous study in Nature Communications where they used machine learning to pattern-match peptides that are clinically significant/bioactive (don’t forget, the vast majority of peptides are currently believed to be degradation products).
Using mass spectrometry, they effectively shoot a sawed-off shotgun at a wall, then use machine learning to detect the pellets that may have interesting effects. This opens up new understanding of the role peptides play in the translational game, as well as the potential for a huge number of new treatments for a vast swathe of diseases.
Sounds similar to some of the research my sister has done in her PhD so far. As I understand, she had a bunch of snapshots of proteins from a cryo electron microscope, but these snapshots are 2D. She used ML to construct 3D shapes of different types of proteins. And finding the shape of a protein is important because the shape defines the function. It’s crazy stuff that would be ludicrously difficult and time-consuming to try to do manually.
There was an interesting talk about it at the last CCC. However, I also remember a few reports casting doubt on the results of this method.
One of the better uses I’ve heard of is in search and rescue type situations. Using AI to find specific items, people or anomalies on a map or video feed can be helpful.
An example regarding wildfires: