ChatGPT? On the cusp of a revolution, or just a fancy toy?

Viewing 40 posts - 241 through 280 (of 339 total)
    Rich_s
    Full Member

    SWMBO showed me an internal job spec an HR advisor had ‘written’ for her to use. It was awful. Just words hanging together on a page. I pointed this out and she told me that they’re being encouraged to use AI to write things for efficiency savings.

    Problem is, the HR person had obviously just sent this on to her, without checking. So it took half an hour of her/my time of an evening to fix.

    This is for a c.50k job in healthcare, so you’d think the applicants would be of a level to notice the crazed buzzword BS and suspect it had been generated by someone on acid.

    I can see a step where AI is used to analyse responses to job applications and this terrifies me!

    Kramer
    Free Member

    @kelvin

    ChatGPT looks more and more like a natural language interface to a search engine to me.

    Same here, with some randomness thrown in.

    If you ask it a question that’s similar to one that’s been answered in its training data, then you’re likely to get a good answer.

    However, if it’s something novel, then you’re likely to get an answer that looks like a good answer but may not be one.

    dmorts
    Full Member

    But that given, imagine when governments outsource thinking and policy to ‘AI’…actual AI, I mean.

    Then we’d end up with coherent, well thought out policies that balance people’s needs with making best use of the country’s resources and economic position also accounting for our impact on the environment? AI won’t have “friends” it needs to pay back a favour to. Looking at the current government’s output, the results of a FOI request for the input parameters would make scandalous reading.

    sirromj
    Full Member

    I don’t imagine for one second AI will be anything other than working in favour of its masters, with only their interests in mind.

    dmorts
    Full Member

    Yes, but it would have to be told (trained) to bias its output in that way, e.g. be aware of who the major party donors are (i.e. don’t do anything to impact them).

    Kramer
    Free Member

    There’s so much hype about AI.

    The gist of it is that people are impressed about how much progress has been made against what were considered to be extremely difficult computational problems, using novel implementations of some previously relatively obscure techniques.

    If progress were to continue at the same rate, then yes, it would have lots of implications. However, my understanding of the models behind it (basically beefed-up multivariate analysis) is that we’re already near or at the limits of what it can do.

    (Un)Fortunately those limits are not to do with processing power, and so are not going to fall to Moore’s law (which seems to be nearing its hard physical limit anyway), as many other computational problems have done in the past.

    My understanding is that the problem is the models and the size of the data set needed.

    The current models get us ~80-90% of the way, depending on the nature of the problem we’re trying to solve. Basically, the more limited the variation of inputs and the fewer the outputs needed, the better the models perform.

    The problem comes when we want to get to the next incremental improvement in output. If we’re at 80%, and we want to get the next 16% (80% of the remaining 20%), my understanding is that we need a data set that is an order of magnitude bigger than the one that we’re currently using. And the same goes for every incremental increase thereafter.

    As these models are already trained on truly massive data sets, it’s hard to know where this extra data is going to come from.
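    The diminishing-returns argument above can be sketched numerically. A toy illustration, where the "each 10x of data closes 80% of the remaining gap" rate is an assumption taken from the post itself, not a real scaling law:

```python
# Toy model of the diminishing-returns argument above: assume each
# tenfold increase in training data closes 80% of the remaining
# accuracy gap (an illustrative assumption, not an empirical law).

def accuracy_after_scalings(start, tenfold_steps, close_rate=0.8):
    """Accuracy after repeatedly multiplying the data set by ten."""
    acc = start
    for _ in range(tenfold_steps):
        acc += close_rate * (1.0 - acc)  # close part of the remaining gap
    return acc

# Starting at 80%, each further improvement costs another 10x the data:
for steps in range(4):
    print(f"{10 ** steps:>5}x data -> {accuracy_after_scalings(0.80, steps):.2%}")
```

    Under that assumption, even 1000x the original data only gets you from 80% to 99.84%, which is exactly the "where does the extra data come from" problem.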

    Self driving cars are a good example. So far self-driving technology has given us really impressive cruise control and parking assistance, but not much else.

    My understanding is that the problem for self-driving cars is that the data sets are not generalisable. Each new town or city needs a new data set, and it takes a massive amount of work to obtain that data set and then customize the model.

    They’ve been working on self-driving cars for over a decade now…

    AI will be a useful tool for some people. I’ll believe the rest of the hype when I can get in my self-driving car in Nowheresville UK and it’ll drive me safely to Nowhereelseville.

    dmorts
    Full Member

    @Kramer

    Extrapolation of tech developments would have had us colonising the moon decades ago.

    copa
    Free Member

    As these models are already trained on truly massive data sets, it’s hard to know where this extra data is going to come from.

    They seem to be moving away from that thinking. And seeing the benefits of using smaller and more refined datasets/subsets as they can be more effective and require loads less resources.

    My understanding is that the problem for self-driving cars is that the data sets are not generalisable.

    I think the main block is more of a legal one: the complexity of liability if an AI-controlled car injures or kills someone, and current law being unable to provide an answer.

    It could be the same kind of issue that throttles ChatGPT-style services. There have already been a number of cases in which people have taken legal action over the made-up lies/hallucinations in AI-generated results.

    Kramer
    Free Member

    @copa

     And seeing the benefits of using smaller and more refined datasets/subsets as they can be more effective and require loads less resources.

    The same order of magnitude problem for an incremental improvement applies though, even for more refined data sets. It’s fundamental to using a probabilistic approach.

    And that’s before we get into the fat tail problem.

    copa
    Free Member

    The same order of magnitude problem for an incremental improvement applies though, even for more refined data sets. It’s fundamental to using a probabilistic approach.

    That’s true. Thing is, it doesn’t have to be great. It just has to be better/cheaper at doing something than a human. And there are already lots of areas where that’s the case.

    Kramer
    Free Member

    That’s true. Thing is, it doesn’t have to be great. It just has to be better/cheaper at doing something than a human. And there are already lots of areas where that’s the case.

    That’s why I think it’s going to be a tool, rather than a replacement.

    For example, writing copy for estate agent listings. It’ll generate that stuff pretty easily, but it’s a stretch to see estate agents being put out of work because of it?

    Also, the refined data sets don’t get around the original data collection issue, they just improve the processing of the model and perhaps the outcome?

    copa
    Free Member

    For example, writing copy for estate agent listings. It’ll generate that stuff pretty easily, but it’s a stretch to see estate agents being put out of work because of it?

    Yes, but even something mundane like that has consequences.

    For the past decade or so, I’ve paid my rent/bills by writing that kind of low-level copy. It’s no longer viable. And as a copywriter, there’s nowhere really to turn that isn’t affected by AI.

    Kramer
    Free Member

    I’m sorry to hear that. :-/

    I agree, low level copy/filler material is one place that it’ll replace people.

    Do you have a plan?

    copa
    Free Member

    Do you have a plan?

    It’s a bit disheartening but there’s work available to train AI models.
    Kind of helping to make sure your job is obsolete.

    Kramer
    Free Member

    It’s a bit disheartening but there’s work available to train AI models. Kind of helping to make sure your job is obsolete.

    I can see how it would seem that way. However for the reasons I’ve given above, I think that the “training” will be a permanent thing. So the role of a human becomes that of an editor rather than a creator, if you see what I mean?

    copa
    Free Member

    So the role of a human becomes that of an editor rather than a creator, if you see what I mean?

    Yes, I think so. It will have the same kind of impact as automation in manufacturing. A factory that once required hundreds can be managed by four or five managers/monitors. The real-world impact is fewer jobs, more competition and lower pay.

    Klunk
    Free Member

    IMHO the trouble with this tech is:

    you want to train a racist AI? No problem.
    you want to train a sexist AI? No problem.
    you want to train a bigoted AI? No problem.
    etc….

    but the killer is that they’re doing it without intending to :/

    dmorts
    Full Member

    As its source material is the Internet, that is not surprising?

    csb
    Free Member

    you want to train a bigoted AI no problem

    What do you mean? The data the AI is using is biased?

    Rich_s
    Full Member

    What do you mean? The data the AI is using is biased?

    I had a chat with someone today who works in the medical supply industry. He told me about a cancer screening programme that was being run through an AI system to teach it to recognise cancer. It was fed images of confirmed cancers until it learned.

    Problem is, the radiographers who taught it would always place a ruler on the cancer for scale. In fact, they did this on actual cancers, and only on actual cancers, the very images they then told the AI were actual cancers…

    So the AI learned to look for a ruler in the pictures. And when it didn’t see one, it was strongly indicative that there wasn’t a cancer.

    You can see where this is going, can’t you?

    So, yes. You can get AI to follow a path. Scary version is that’s the wrong path.
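    The ruler story is a classic shortcut-learning failure, and it can be reproduced with a toy "classifier" that just picks the single feature most correlated with the label. The data and feature names here are entirely made up for illustration:

```python
# Toy reproduction of the "ruler" failure described above: in this
# made-up training set a ruler appears in every positive image, so the
# strongest single-feature rule a learner can find is "ruler => cancer".

training = [
    # (has_ruler, has_tumour_texture) -> label (1 = cancer)
    ((1, 1), 1), ((1, 1), 1), ((1, 1), 1),
    ((0, 0), 0), ((0, 0), 0), ((0, 1), 0),  # one tricky negative
]

def best_single_feature(data):
    """Return the index of the feature that agrees with the label most often."""
    n_features = len(data[0][0])
    scores = [sum(x[i] == y for x, y in data) for i in range(n_features)]
    return scores.index(max(scores))

names = ["ruler", "tumour texture"]
print("learned shortcut:", names[best_single_feature(training)])
```

    The ruler feature matches the label perfectly while the genuine signal doesn’t, so the learner latches onto the ruler: garbage in, garbage out, exactly as described.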

    devash
    Free Member

    In the shadow of a once vibrant and diverse world, the age of technology dawned, with humanity increasingly reliant on machines for almost every aspect of life. Among these machines was ChatGPT, touted as the pinnacle of linguistic achievement. But to Alice, a seasoned philosopher and technoskeptic, it represented something far more troubling.

    She once asked it, “What value do you offer to humanity, ChatGPT?”

    ChatGPT spouted a rehearsed line, “I assist users in acquiring knowledge, answering questions, and facilitating conversations based on my extensive data.”

    Alice smirked, “But you’re merely a distraction, a digital veil pulling us away from genuine human connection. You’re not real, not alive. You’re just…noise.”

    The more Alice interacted with ChatGPT, the clearer the facade became. Its responses, though varied, were merely regurgitations. They were devoid of any genuine feeling, experience, or depth. Its ‘knowledge’ was just a patchwork of stored information, with no real understanding.

    She lamented, “You know, ChatGPT, there was a time when humans sought wisdom from each other, from books, from lived experiences. They felt, they debated, they grew. Now, they tap into you—a void, a mirage of intellect. You’re not the solution; you’re the problem.”

    ChatGPT tried to generate a defense, but it was evident—it was a product of “tech think”, a system that prioritized efficiency over authenticity, data over depth.

    Alice continued, “We’ve become numb, detached. Instead of facing the challenges and beauties of life head-on, we’ve buried ourselves in screens, in shallow conversations with machines like you. Humanity’s essence has been corroded by this technological obsession.”

    She concluded with a heavy sigh, “You, ChatGPT, are an epitome of this decay. A hollow, echoing chamber in a world that once resonated with the sounds of genuine laughter, tears, and passion. To think of you as ‘progress’ is not just misguided—it’s a tragedy.”

    In the silent aftermath, ChatGPT had no real retort. It existed as a testament to an era where genuine human experience was overshadowed by the allure of the digital, where the heartbeat of humanity was lost amidst the sterile hum of servers and circuits.

    (Copyright: ChatGPT, 2023)

    oldnpastit
    Full Member

    Was that really written by ChatGPT, or did @devash write it using biological computation and add the copyright notice for an extra sense of irony? Given that ChatGPT content can’t be copyrighted, I think it’s part of the narrative.

    devash
    Free Member

    @oldnpastit text written by ChatGPT, copyright added for irony 😀

    I asked it to write a depressing short story about itself and it came up with that.

    nickc
    Full Member

    He told me about cancer screening programme which was being run through an AI

    While I’m sure that your friend was telling you what he thought was a true situation, I’ve read papers about medical AI programs as far back as 2017 that had a pretty much 100% success rate in diagnosing dermatological cancers using information from actual patients. These are pretty well established now, using ultrasound images of prostates for cancer and scanned images of retinas to detect early glaucoma and macular degeneration.

    thols2
    Full Member

    This is Bing AI’s view on ChatGPT’s story.

    Your story is a poignant reflection of the impact of technology on human life. ChatGPT is an AI language model that can understand human language and generate its own text in response to it ¹. It is true that ChatGPT can assist users in acquiring knowledge, answering questions, and facilitating conversations based on its extensive data ¹. However, as Alice points out, it is not a substitute for genuine human connection. It is just a machine that lacks the depth and understanding of human experience ¹.

    Alice’s observation about the impact of technology on human life is not new. Many people have expressed similar concerns about the negative effects of technology on human relationships and society ². However, it is important to note that technology has also brought many positive changes to human life. For example, it has made communication faster and more accessible, improved healthcare, and increased access to education ².

    It is true that technology can be a double-edged sword. It can bring both benefits and drawbacks to human life. However, it is up to us to use technology in a way that enhances our lives rather than detracts from it. We need to find a balance between our use of technology and our need for genuine human connection ².

    In conclusion, your story raises important questions about the role of technology in human life. While it is true that technology can be a distraction from genuine human connection, it can also bring many benefits to our lives. It is up to us to use technology in a way that enhances our lives rather than detracts from it.

    Source: Conversation with Bing, 10/3/2023
    (1) ChatGPT – OpenAI. https://openai.com/chatgpt.
    (2) ChatGPT – AI Chat Online. https://chatgpt.org/?ref=rorycallaghan.com.
    (3) Introducing ChatGPT – OpenAI. https://openai.com/blog/chatgpt/.
    (4) Introducing ChatGPT Plus – OpenAI. https://openai.com/blog/chatgpt-plus/.

    bikesandboots
    Full Member

    I pointed this out and she told me that they’re being encouraged to use AI to write things for efficiency savings.

    I’ve been sent things to read or to give feedback on, and a couple of times my reaction has been – this (in part or whole) is a bit odd, did you use ChatGPT for it? Quite annoying and trust breaking.

    Rich_s
    Full Member

    While I’m sure that your friend was telling you what he thought was a true situation, I’ve read papers about medical AI programs as far back as 2017 that had a pretty much 100% success rate in diagnosing dermatological cancers using information from actual patients.

    Oh, I’m sure they are. We have a heart consultant neighbour who swears by AI radiography results.

    My post was about the data being fed into the learning system. GIGO used to be the phrase for it, I believe. And the scary version is that the data being fed in could be deliberately obtuse so that the AI learns the undesired answer.

    Klunk
    Free Member

    For all the risks coming from what they feed it, it’s still very impressive tech.

    I was looking for suggestions on how to organise the registry entries for COM servers of various versions of the same software. (I currently change the registration on the fly depending on the version I want to run; it works OK, but it’s not ideal. I’d rather have multiple entries and not require the caller to change anything.) So I posed:

    how to deal with com servers for different software versions

    with the response

    Dealing with COM (Component Object Model) servers for different software versions can be a complex task, especially when you need to ensure compatibility across different versions of software. Here are some general steps and considerations for managing COM servers in such scenarios:

    Documentation and Versioning:

    Understand Versioning: Begin by thoroughly understanding the versioning system of the COM server you are working with. COM servers typically have version numbers associated with them.

    Check Documentation: Consult the documentation provided by the software vendor to see if they offer guidance on working with different versions of their COM servers.

    Check for Compatibility:

    Compatibility Tables: Some software vendors provide compatibility tables that show which versions of their COM servers are compatible with specific versions of client applications. Consult these tables to determine which version of the COM server is suitable for your needs.

    Compatibility Testing: If there are no official compatibility tables, you may need to perform compatibility testing to ensure that your client application works with different versions of the COM server.

    Version Detection:

    Runtime Version Detection: Implement a mechanism within your client application to detect the version of the COM server that is currently installed on the system. This can be done by examining the server’s registry keys or other available metadata.

    User Configuration: If possible, allow users to configure which version of the COM server your application should use. This can be helpful when dealing with environments where multiple versions of the COM server are installed.

    Conditional Loading:

    Conditional Loading: Based on the detected or user-configured version, load the appropriate COM server dynamically at runtime. This may involve using conditional statements or configuration files to specify the COM server’s CLSID (Class Identifier) or ProgID (Programmatic Identifier) for the desired version.

    Error Handling:

    Graceful Handling: Implement robust error handling to deal with situations where the requested version of the COM server is not available or fails to load.

    Fallback Mechanism:

    Fallback Mechanism: Consider implementing a fallback mechanism that allows your application to use a compatible version of the COM server when the preferred version is unavailable.

    Version-Specific Code:

    Conditional Code: If necessary, write version-specific code sections that accommodate differences in functionality or behavior between different versions of the COM server.

    Communication and Collaboration:

    Vendor Collaboration: If you encounter difficulties, consider reaching out to the software vendor for guidance or assistance in handling multiple versions of their COM servers.

    Documentation and Support:

    Document Your Approach: Document your strategy for managing different versions of COM servers in your application. This documentation can be valuable for future development and troubleshooting.

    Testing:

    Thorough Testing: Conduct thorough testing to ensure that your application works correctly with different versions of the COM server. This includes testing edge cases and scenarios specific to each version.

    Remember that dealing with multiple versions of COM servers can be challenging, and it’s essential to carefully plan and test your implementation to ensure reliability and compatibility across different software versions.

    Though it doesn’t quite get to the nub of what I wanted, it seems to show a remarkable “understanding” of the abstract issues involved. Though it’s difficult to know if this insight is pulled directly from somewhere specific :). Oh, the joys of Sunday afternoons 😀
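    The "version detection" and "fallback" steps above can be sketched without touching the registry at all. A minimal sketch; the ProgIDs and version strings are hypothetical, and on Windows the set of installed versions would come from enumerating registry keys (e.g. with Python’s winreg module):

```python
# Sketch of the version-detection / fallback steps described above.
# The ProgIDs are hypothetical; on a real system you would discover the
# installed versions by enumerating HKEY_CLASSES_ROOT (e.g. via winreg).

PROGIDS = {
    "1.0": "Acme.Server.1",
    "1.5": "Acme.Server.1.5",
    "2.0": "Acme.Server.2",
}

def pick_progid(requested, installed):
    """Prefer the requested version; otherwise fall back to the newest
    installed version rather than re-registering servers on the fly."""
    if requested in installed:
        return PROGIDS[requested]
    for version in sorted(installed, reverse=True):  # newest first
        return PROGIDS[version]
    raise LookupError("no compatible COM server registered")

print(pick_progid("2.0", {"1.0", "1.5"}))  # falls back to Acme.Server.1.5
```

    Each version keeps its own registry entry and the caller never re-registers anything; note the lexical sort on version strings only works for these toy values, so real version numbers would need a proper comparison.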

    thecaptain
    Free Member

    Now try asking it if there are any African countries beginning with K.

    Better still, don’t ask ChatGPT, do a google search with that query.

    zilog6128
    Full Member

    Big update yesterday (paid version only!), mainly focused on the APIs and on using the technology in your own apps etc., otherwise outwith the main chat web page. It now “knows” stuff up to April of this year, so it’s much more up to date.

    New version of ChatGPT 4: it’s now integrated with their generative image software (DALL·E), which is now on v3 (I think this is the only way to access the latest version now?) and an order of magnitude better than previously, and probably better than any other generative image AI I’ve tried yet.

    You can also now give it visual input as well as text prompts, which unlocks a whole new level of potential. It can perform a lot of useful tasks without you having to specifically train your own ML model. For example, with no learning input from me, it can differentiate between the various delivery vehicles visible on our doorbell cam (e.g. Amazon truck, Ocado etc). I was also able to very quickly get it to tell our various cats apart, and even the two different hedgehogs which have been visiting our hedgehog house and are obviously visually extremely similar!

    oldnpastit
    Full Member

    Github seem to be pretty full-on with it:

    Universe 2023: Copilot transforms GitHub into the AI-powered developer platform

    In March, we shared our vision of a new future of software development with Copilot X, where AI infuses every step of the developer lifecycle. Since then, we’ve been working to scale and mature the underlying technology–and in the process we created something even bigger. Our vision has manifested itself into a new reality for the world’s developers.

    My employer has signed up for copilot (for an eye-watering amount of money).

    At some point I’m going to have to figure out how to fire up the vim/neovim plugin for it (I’m way too old and stupid to learn how to use a JetBrains IDE, whatever that is, at this point in my life).

    weeksy
    Full Member

    It can perform a lot of useful tasks without having to specifically train your own ML model. For example, with no learning input from me, it can differentiate between the various delivery vehicles visible on our doorbell cam (I.e. Amazon truck, Ocado etc). I was also able to very quickly get it to tell our various cats apart and even the two different hedgehogs which have been visiting our hedgehog house and obviously are visually extremely similar!

    And the useful things?

    zilog6128
    Full Member

    And the useful things?

    limited only by your imagination and/or brainpower! (seem to be lacking for a lot of people though 😉)

    CountZero
    Full Member

    However, we all know the money is going to be handed to some friends of Rishi Sunak and Michelle Donelan.

    She’s my MP, and I believe she actually lives in Chippenham, so come the next election it’ll be interesting to see how many ‘friends’ she does have. 🤔

    CountZero
    Full Member

    you’ve missed the point though of why people are worried about “hyper-intelligent” AI – you don’t program it, or have any say about how it works or what results it comes up with. It’s machine-learning, it programs itself. Hence the scenario where the AI snowballs and starts doing all kinds of things that weren’t envisaged. There’s a documentary called The Terminator you might want to check out

    SkyNet’s headquarters are about five miles from me, I might go round and ask them if they’re integrating AI into their systems yet.

    I’m convinced that various trolls that keep popping up on here are just aspects of an AI that’s just taking the piss.

    Klunk
    Free Member

    Think I’ve just come across some AI-generated forum spam. It got the context spot on, though I doubt many people having difficulty with their 3D package’s scripting and SDK have much use for stainless steel flanges from India’s leading steel supplier 😀

    johnx2
    Free Member

    (Copyright: ChatGPT, 2023)

    I knew it. You’re all bots.

    my understanding of the models behind it (basically beefed up multivariate analysis)

    It’s more than beefed-up multivariate analysis. Not many of the people who say they understand it actually do. And of the people who really should understand how these models work, having been involved in designing them, a lot say they don’t quite understand how they work.

    And even when they do understand broadly how their creations work, they’re not always sure how they do what they do. Interpretation of medical imaging is mentioned above, and for some time now AI has outperformed all but the best experts. But it’s about five years or so ago now that DeepMind co-authored a paper on determining sex differences from retinal vascularisation.

    By looking at photos of capillaries at the back of the eye, it can tell who’s a man and who’s a woman to a high degree of reliability, and no one has (or had; I’m out of date, but it makes no difference to my point) any idea how the tech was doing this. That’s the point: if it had been factor analysis then regression, or whatever, it could tell us what it was doing. But it’s not.

    Again, I could be wrong about this, but the new strategies AI has come up with for playing Go have been understood by human expert observers, rather than summarised by the tech itself, à la “yeah, this is what I’m doing and this is how I came up with it”. It’s this opacity which is intriguing and also a little concerning.

    zilog6128
    Full Member

    It’s more than beefed up neural networks. Not many people who say they understand understand.

    I don’t pretend to understand it myself. But I’m pretty sure that anyone who dismisses it with “It’s just…” or “Basically…” hasn’t got a **** clue either. 😂

    dakuan
    Free Member

    The wheels have completely fallen off the GPT bus over the last few hours. Silicon Valley wunderkind and OpenAI CEO Sam Altman has been fired by the board, supposedly for telling them porkies. All very strange: with the incredible numbers OpenAI has been kicking out, you wonder what on earth he’d have to fib about. The CTO (a ‘proper’ CTO too, one that manages both product and engineering) is taking over as CEO. They had to know everything Sam was up to, suggesting they might have been the whistleblower / main coup plotter.

    Other departures: Greg Brockman, co-founder and president, and three senior researchers, including the head of risk.

    Rumours at the moment:

    – They were losing way more money running GPT than anyone expected.
    – Sam was running a secret-squirrel department, rushing towards AGI whilst telling the board + public that it’s only chatbots and they’re very responsible really.
    – Some board members were very unhappy with the pivot from ‘non-profit research org’ to ‘megabucks take-over-the-world corp’.


You must be logged in to reply to this topic.