• ribboo@lemm.ee · 2 years ago

    It’s rather interesting that the board, which has a fairly strong scientific presence and not much of a commercial one, is getting such hate.

    People are quick to jump on for-profit companies that do everything in their power to earn a buck. Well, here you have a company whose board fired its CEO for pushing too hard in the direction of earning money.

    Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

    • PersnickityPenguin@lemm.ee · 2 years ago

      Sounds like the workers all want to end up with highly valued stocks when it goes IPO. Which is, and I’m just guessing here, the only reason anyone is doing AI right now.

    • archomrade [he/him]@midwest.social · 2 years ago

      Well, here you have a company whose board fired its CEO for pushing too hard in the direction of earning money.

      I think that is very much in question by the people who are up in arms.

      • ribboo@lemm.ee · 2 years ago

        Altman went to Microsoft within 48 hours; does anything else really need to be said? Add to that the fact that basically every news outlet has reported, with different sources, that he was pushing in exactly that direction. There’s very little to suggest that reality is any different.

    • theneverfox@pawb.social · 2 years ago

      This was my first thought… But then why are the employees taking a stand against it?

      There’s got to be more to this story

    • TurtleJoe@lemmy.world · 2 years ago

      It’s my opinion that every single person in the upper levels of this organization is a maniac. They are all a bunch of so-called “rationalist” tech-right AnCaps who justify their immense incomes through the lens of Effective Altruism, the same ideology that Sam Bankman-Fried used to justify his theft of billions from his customers.

      Anybody with the urge to pick a “side” here ought to think about taking a step back and reconsider; they are all bad people.

    • Rooskie91@discuss.online · 2 years ago

      I’m sure some amount of the negative press is propaganda from corporations who would like to profit from using AI and are somehow prevented from doing so by OpenAI’s model.

  • Even_Adder@lemmy.dbzer0.com · 2 years ago

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee · 2 years ago

      Yeah, Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion, since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription just to access it.

      • extant@lemmy.world · 2 years ago

        We only have free access now because it’s still in development and they’re using our interactions as training data, but once they’re on more solid ground I fully expect enshittification.

    • sab@kbin.social · 2 years ago

      And if they don’t, we’re supposed to keep on believing all of this is somehow benefiting us?

      • Even_Adder@lemmy.dbzer0.com · 2 years ago

        The way I understand it, Microsoft gave OpenAI $10 billion, but they didn’t get any votes. They had no say in the company’s affairs.

          • Alto@kbin.social · 2 years ago

            On paper, sure. But they gave them $10B; they absolutely have some sort of voice here.

  • conditional_soup@lemm.ee · 2 years ago

    I’d like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to re-instate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

    • los_chill@programming.dev · 2 years ago

      Altman wanted profit. The board prioritized (rightfully, and in line with their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs. the original non-profit mission (the board).

      Edit: spelling

      • CoderKat@lemm.ee · 2 years ago

        That’s what I thought it was at first too. But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

        But do they know things we don’t know? They certainly might. Or it might just be bandwagoning or the like.

        • los_chill@programming.dev · 2 years ago

          But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

          I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That, or they’re just all-in on Altman as a person.

      • Melt@lemm.ee · 2 years ago

        The tone of the blog post is so amateurish I feel like I’m reading a reddit post on r/Cryptocurrency

      • conditional_soup@lemm.ee · 2 years ago

        Thanks for sharing. That is… weird in ways I didn’t anticipate. “Weird cult of pseudointellectuals upending the biggest name in Silicon Valley” wasn’t on my bingo board.

        • FaceDeer@kbin.social · 2 years ago

          IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of “it’s going to be massively disruptive to the economy and we need to prepare for that to ensure it’s a net positive”, not “it’s going to take over our minds and turn us into paperclips.”

      • Bal@lemm.ee · 2 years ago

        I don’t know a lot about the background but this article feels super biased against one side.

      • Coasting0942@reddthat.com · 2 years ago

        Can somebody explain the following quote in the article for me please?

        Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar.

        • vanquesse@lemmy.blahaj.zone · 2 years ago

          Imagine “Roko’s basilisk,” but extended into an entire philosophy. It’s the idea that “we” need to do anything and everything to create the inevitable ultimate super-AI, as fast as possible. Climate change, wars, exploitation, suffering? None of that matters compared to the benefits humanity stands to gain when the ultimate super-AI comes online.

    • morrowind@lemmy.ml · 2 years ago

      I don’t think MSFT convinced him with money, but rather opportunity. He clearly still wants to work with AI, and the second-best place for that after OpenAI is Microsoft.

      • SnipingNinja@slrpnk.net · 2 years ago

        Second best would be Google, but for him it’s Microsoft, because he’s probably getting a sweetheart deal that leaves him in control of his own destiny (not really, but at least for a short while).

        • morrowind@lemmy.ml · 2 years ago

          Microsoft has access to a lot of OpenAI’s code, weights, etc., and he’s already been working with them. It would be much better for him than joining some other company he has no experience with.

          • SnipingNinja@slrpnk.net · 2 years ago

            He’s not the guy who writes code; he’s a VC/management guy. You might say he has good ideas, since the ChatGPT interface is attributed to him, but he didn’t build it.

  • Marxism-Fennekinism@lemmy.ml · edited · 2 years ago

    https://time.com/6247678/openai-chatgpt-kenya-workers/

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

    OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

    The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

    […]

    That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

    Gonna leave this here.

        • reksas@sopuli.xyz · 2 years ago

          This is actually extremely critical work, if the results are going to be used by AIs that are deployed widely. It essentially determines the “moral compass” of the AI.

          Imagine if some big corporation did the labeling and such, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord moves from beyond-crazy idea to plausible (years and years in the future, not now, obviously).

          • ColdFenix@discuss.tchncs.de · 2 years ago

            Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it’s the only job available and otherwise they and their families might starve. (Source.) This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?

      • FaceDeer@kbin.social · 2 years ago

        Insert meme of “guy shooting a talk-show guest in his chair, then turning to the camera and asking ‘why has the OpenAI board done this?’” Here, I guess.

    • 4L3moNemo@programming.dev · 2 years ago

      An odd error for the company, indeed. • 505 HTTP Version Not Supported

      Just one vote missing till the • 506 Variant Also Negotiates

      Guess they’re stuck now. :D