Possibly through ignorance or misunderstanding, but I still think the tech behind NFTs may have some function; it’s certainly not the speculation market in weird pictures of badly colored-in monkeys that actually happened there.
It could potentially work for DRM, in that you can have a key assigned to an identity that can later be transferred and not be dependent on a particular marketplace.
For example, you could buy a copy of whatever next year’s Call of Duty game will be, and have the key added to your NFT wallet. Then you could play it on Xbox, PlayStation, Steam, or GOG with that single license.
Of course that will never happen because that’d be more consumer friendly than what we have now.
Basically functioning as a digital proof of purchase.
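To make the “key in your wallet” idea above a bit more concrete, here is a minimal sketch of what such a transferable license could look like as a data structure. Everything in it (the class names, the registry, the token IDs) is hypothetical and purely illustrative; it is not a real blockchain contract or any storefront’s actual API.

```python
# Toy sketch of the idea above: a transferable "license token" tied to an
# identity, which any storefront could check. NOT a real blockchain or any
# platform's API; all names and structures here are made up for illustration.
from dataclasses import dataclass

@dataclass
class LicenseToken:
    token_id: str   # unique id of this license (the "NFT")
    game_sku: str   # which game the license is for
    owner: str      # wallet / identity that currently holds it

class LicenseRegistry:
    """Stands in for the shared ledger every storefront would have to trust."""

    def __init__(self):
        self._tokens: dict[str, LicenseToken] = {}

    def mint(self, token_id: str, game_sku: str, owner: str) -> LicenseToken:
        # Issued once at purchase time, e.g. by the publisher.
        token = LicenseToken(token_id, game_sku, owner)
        self._tokens[token_id] = token
        return token

    def transfer(self, token_id: str, new_owner: str) -> None:
        # Resale or gifting without going through the original marketplace.
        self._tokens[token_id].owner = new_owner

    def owns_game(self, owner: str, game_sku: str) -> bool:
        # What Steam, GOG, or a console storefront would check before launching.
        return any(t.owner == owner and t.game_sku == game_sku
                   for t in self._tokens.values())

# Usage: buy once, play anywhere that queries the same registry.
registry = LicenseRegistry()
registry.mint("token-001", "cod-next-year", owner="alice-wallet")
print(registry.owns_game("alice-wallet", "cod-next-year"))  # True
registry.transfer("token-001", new_owner="bob-wallet")
print(registry.owns_game("alice-wallet", "cod-next-year"))  # False
```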
There are pretty great applications in medicine. AI is an umbrella term that includes working with LLMs, image processing, pattern recognition and other stuff. There are fields where AI is a blessing. The problem is, as JohnSmith mentioned, it’s the “solar battery” of the current day. At one point they had to make and/or advertise everything with solar batteries, even stuff that was better off with… batteries. Or the good ol’ plug. Hopefully it will settle down in a few years’ time and they will focus on the areas where it is more successful. They just need to find out which areas those are.
Like what? Just two days ago I discussed this with a friend who works in public healthcare and is bullish about AI, and the best he could come up with was DeepMind’s AlphaFold, which is interesting, even important, and yet in a way “good old-fashioned AI” as has been the case for the last half century or so: a team of dedicated researchers, actual humans, focusing on a hard problem, throwing state-of-the-art algorithms and some compute resources at it. But AFAICT there is no significant medical research that has made a significant change through “modern” AI like LLMs.
The first thing that comes to my mind is cancer screening. I had to look it up because I can’t always trust my memory, and I thought there was some AI involved in the RNA sequencing research for the Covid vaccine, but I actually remembered wrong.
Skimmed through the article and I found it surprisingly difficult to pinpoint what “AI” solution they actually covered, despite going as far as opening the supplementary data of the research they mentioned. Maybe I’m missing something obvious so please do share.
AFAICT they are talking about using computer vision techniques to highlight potential problems, in addition to showing the non-annotated image.
This… is great! But I’d argue this is NOT what “AI” is hyped about at the moment. What I mean is that computer vision and statistics have been used, in medicine and elsewhere, with great success, and I don’t see why they wouldn’t be applied here. Rather, I would argue the current AI hype is about LLMs and generative AI, and AFAICT (but again, I had a hard time parsing this paper to get anything actually specific) none of this work uses them.
FWIW I did specify in my post that my criticism was about “modern” AI, not AI as a field in general.
I’m not at that exact company, but a very similar one.
It’s AI because we essentially take early scans from people who are later diagnosed with respiratory illnesses and use them to train a neural network to recognise early signs that a human doctor wouldn’t notice.
The actual algorithm we started with and built upon is basically identical to one of the algorithms used in generative AI models (the one that takes an image, does some maths wizardry on it, and tells you how close the image is to the selected prompt). Of course we heavily modify it for our needs, so it’s pretty different in the end product: we’re not feeding its output back into a denoiser, and we add a lot of cognitive layers and some other tricks to bring the reliability up to a point where we can actually use it. But at its core it’s still the same algorithm.
Thanks, any publication please, to better understand how it works?
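For anyone curious what “training a neural network on early scans” roughly looks like in code, here is a minimal, generic sketch of a binary image classifier in PyTorch. It is not the poster’s actual pipeline (which they describe as a heavily modified image-text similarity model); the architecture, shapes, and data below are placeholders.

```python
# Generic sketch only: a small binary classifier over early scans, where the
# label is whether the patient was later diagnosed. This is NOT the poster's
# actual (heavily modified, similarity-based) pipeline; everything here is a
# placeholder to show the general shape of such a system.
import torch
from torch import nn

class ScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: "later diagnosed" vs not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = ScanClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 8 single-channel 256x256 "scans" with 0/1 outcome labels.
scans = torch.randn(8, 1, 256, 256)
labels = torch.randint(0, 2, (8, 1)).float()

logits = model(scans)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss on the toy batch: {loss.item():.3f}")
```

In a real system the random tensors would of course be replaced by curated, labelled scans and much more careful evaluation, which is where most of the actual work lies.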
Can’t believe I’m doing this… but here I go, actually defending cryptocurrency/blockchain:
… so yes, there is some functionality to AI. In fact I don’t think anybody is saying 100% of it is BS and a scam; rather, just 99.99% of the marketing claims during the last decade ARE overhyped if not plain false. One could say the same for crypto/blockchain, namely that SQLite or a random DB is enough for most people, BUT there are SOME cases where it might actually be somehow useful, ideally not hijacked by “entrepreneurs” (namely VC tools) who only care about making money, not about what the technology could actually bring.
Now, anyway, both AI & crypto use an inconceivable amount of resources (energy, water, GPUs and dedicated hardware, real estate, R&D top talent, human labour for dataset annotation, including very VERY gruesome tasks, etc.), so even if in 0.01% of cases they are actually useful, one still must ask: is it worth it? Is it OK to burn literally tons of CO2eq… to generate an image that one could have produced quite easily another way? To summarize a text?
IMHO both AI & crypto are not entirely useless in theory, yet in practice they have been:
- hijacked by VCs and grifters of all kinds,
- abused by pretty terrible people, including scammers and spammers,
- absolutely underestimated in terms of resource consumption, and thus ecological and societal impact.
So… sure, go generate some “stuff” if you want to, but please be mindful of what it genuinely costs.
I think you’ve got it backwards. The very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI now. This includes Altman and Zucc for starters, but there are more.
In programming, AI has real applications. I have personally refactored code and designed services with ChatGPT, doing in hours what would have taken me days; it’s just good at it. For non-techies, though, I can’t say.
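The poster used ChatGPT interactively, but the same kind of refactoring request can also be scripted. Here is a rough sketch against the OpenAI Python client; the model name and prompt are arbitrary placeholders, and any comparable LLM API would do.

```python
# Rough sketch of scripting a refactoring request to an LLM, as an alternative
# to pasting code into a chat window. The model name and prompt are arbitrary
# placeholders; the output still needs human review before being committed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_code = """
def total(xs):
    t = 0
    for i in range(len(xs)):
        t = t + xs[i]
    return t
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user",
         "content": "Refactor this Python function to be idiomatic, keep the "
                    "behaviour identical, and explain the changes:\n" + legacy_code},
    ],
)

print(response.choices[0].message.content)
```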
That’s because there’s a non-zero amount of actual functionality. ChatGPT does some useful stuff for normal people. It’s accessible.
Contrast that with crypto, which was only accessible to tech folks and barely useful, or NFTs, which had no use at all.
Ok, I guess to be fair, the purpose of NFTs was to separate chumps from their money, and it was quite good at that.