This conversation and the reactions it caused made me think of a few tips to explicitly veer away from AI-aided dystopias in your fictional universe.

Avoid a monolithic centralized statist super-AI

I guess ChatGPT is the mental model people use: the idea that a single supercomputer manages every aspect of a community. And people are understandably wary of a single point of control that could too easily lead to totalitarianism.

Instead, have a multitude of transparent local agents managing different systems, each with a different algorithm and “personality”.

Talk about open source

The most used AI models today are open source. Our media is biased toward thinking that things which do not generate commercial transactions are unimportant, yet I am willing to bet that more tokens are generated by all the free models in the world than by OpenAI and its commercial competitors.

AIs do not have to be produced by opaque companies in their ivory towers. They are the work of researchers and engineers who have a passion for designing smart systems and who (a fact too often obscured by the sad state of a society where you often have to join a company to make a living) do it with a genuine concern for humanity’s well-being and a desire that this work be used for the greater good.

It is among AI engineers that you will find the people most paranoid about AI safety and safeguards. In a solarpunk future, this is a public debate and a political subject, an important part of the policy discussion: we make models together, with incentives that are collectively agreed upon.

AIs are personal

You don’t need a supercomputer to run an AI. LLMs today run on relatively modest gaming hardware, even on a Raspberry Pi (though slowly, for now). Energy-efficient chips are currently being designed to lower the barrier to entry even further.
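As a rough illustration of why modest hardware is enough, the memory a model’s weights need is roughly its parameter count times the bits per weight. The figures below are back-of-envelope estimates, not benchmarks:

```python
# Back-of-envelope weight sizes for common local-model scales.
# 16 bits = standard half precision; 4 bits = the heavy quantization
# most local runners use.

def weight_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights in gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (3, 7, 13):
    print(f"{params}B model: "
          f"{weight_size_gb(params, 16):.1f} GB at 16-bit, "
          f"{weight_size_gb(params, 4):.1f} GB at 4-bit")
```

At 4 bits a 7B model’s weights come to about 3.5 GB, which is why it can squeeze onto an 8 GB Raspberry Pi or a mid-range gaming GPU (with some extra room still needed for the context cache).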

It is a very safe bet that in the future every person will have their own intelligent agent managing their local devices, or even one agent per device with an orchestrator on their smartphone. And it is important that people stay in complete control of these agents.

AIs should enhance humans’ control over their own devices, not make them surrender it.

AIs as enablers of democracy

You would not only use your pocket AI to control your dishwasher; it would also be your personal lawyer and representative. No human has the bandwidth to follow all the policy debates happening in a typical country, or even a local community. But a well-designed agent that spends time discussing with you will know your preferences and make sure they are represented.

It can engage in discussions with other agents to find compromises, and to propose or oppose initiatives.

As everyone’s opinion is now included in every decision about road planning, public transportation, construction schedules and urban development, the general landscape will organically grow friendlier for everybody.

  • max@lemmy.blahaj.zone · 10 months ago

    nice ! ive been a bit wary of llms cuz of electricity usage n environmental impact. iz there any things u can point me to for running a greener ai myself?

    • keepthepace@slrpnk.net (OP) · 10 months ago

      There is, IMHO, a very counter-productive dynamic arising around the debate on the environmental impact of IT. We mostly hear luddites and techbros arguing in bad faith over invented numbers. I would urge everyone involved in this debate to first make sure the numbers they use are correct.

      Using an LLM with today’s tech (which is not yet really optimized for it) is akin to running a 3D video game at high graphics settings: it uses the GPU quite a bit, but only while you are actually running queries through the model, which may be infrequent. At full load my GPU draws 170 W; add probably 200 W for the rest of the computer. I know it is really not my primary source of emissions, especially living in France, where the grid’s CO2/kWh is pretty low.
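To put those wattages in perspective, here is a quick sketch. The 30-second query time and the ~55 gCO2/kWh carbon intensity for the French grid are my own assumptions for illustration, not measurements:

```python
# Rough CO2 cost of one local LLM query on the setup described above.
GPU_WATTS = 170          # GPU at full load
REST_WATTS = 200         # rest of the computer (estimate)
QUERY_SECONDS = 30       # assumed time the GPU is busy per query
GRID_G_CO2_PER_KWH = 55  # assumed carbon intensity (French grid)

energy_kwh = (GPU_WATTS + REST_WATTS) * QUERY_SECONDS / 3600 / 1000
co2_grams = energy_kwh * GRID_G_CO2_PER_KWH

print(f"{energy_kwh * 1000:.1f} Wh and {co2_grams:.2f} g CO2 per query")
```

A fraction of a gram of CO2 per query: even hundreds of queries a day stay far below what heating water for a single shower emits.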

      A greener AI would be one that frees my time to work on home insulation or on convincing people to switch to heat pumps. At some point I’ll probably install solar panels and home batteries; 400 W is a relatively easy target to cover. The water heater and the cooking devices use more than that.

      The debate is really about the cost of training models, which occupies datacenters at full capacity for days, or even months for the biggest ones. The thing is, many people confuse training with inference. Training has to be done only once. Well, once per model, which is why open source models are so crucial: if a thousand companies each train their own proprietary model, a lot of energy is wasted, but if they instead share one trained model and maybe fine-tune it for a few hours each, the total energy used drops dramatically.
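The amortization argument can be made concrete with made-up but plausible orders of magnitude (the MWh figures below are illustrative assumptions, not measured numbers):

```python
# Illustrative comparison: every company trains from scratch
# vs. everyone sharing one open base model and fine-tuning it.
N_COMPANIES = 1000
TRAIN_MWH = 500    # assumed energy to train one model from scratch
FINETUNE_MWH = 1   # assumed energy for a short fine-tune

proprietary = N_COMPANIES * TRAIN_MWH
shared = TRAIN_MWH + N_COMPANIES * FINETUNE_MWH

print(f"everyone trains from scratch: {proprietary} MWh")
print(f"one shared base model: {shared} MWh "
      f"({proprietary / shared:.0f}x less)")
```

Whatever the exact numbers, the ratio scales with the number of users of the shared model, which is the whole point of releasing open weights.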

      Also, many datacenters have been greenwashing a lot, claiming to have cut their environmental impact tenfold or even offset it entirely. This is greenwashing not because it is false, but because the intent is much more straightforward: electricity is a big part of their costs, and cutting it down is just good business sense.

      It has become customary for big models to publish the energy used and an estimate of the CO2 emitted during training. Llama 2, possibly the biggest open model trained so far, emitted about 1000 t of CO2 equivalent. That sounds like a lot, but it is roughly one 10-hour international commercial flight, and it fed the open source community for more than a year. Any AI conference would emit more. And unlike flights, training does not have to emit CO2: it runs on electricity that can be produced sustainably.

      I tend to veer a bit toward the techbro side here: when you look at the actual numbers and the actual possibilities, the emissions are not problematic; they are useful uses of electricity that belong in the debate over a sustainable electricity grid.

      • max@lemmy.blahaj.zone · 10 months ago

        thanks for the reply !! i have an old “gaming” laptop (that i just use for most things tbh) but its 150W and got a GTX 1060m in it, so i would be curious to see if i could run some sort of local ai on it. i believe the CO2/kWh is pretty good here, we rely mostly on renewables but still have some coal power stations around -_- ah ! if training the ai is the hard part and running it is easier on energy then it may be better than i thought, though i am wary of how it gets used by companies… i am very aware though, going into an engineering field in the next couple years, that llms are gonna be a big tool to use, though i dont agree with stuff like chatgpt for privacy reasons, but i was unsure about how bad the environmental impact would be if i tried it, thanks for the info !! :3

        • keepthepace@slrpnk.net (OP) · 3 months ago

          The important spec for running a good model is VRAM. Yours, with 6 GB, is a bit on the low end, but I guess it could run a 7B model with heavy quantization?
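For what it’s worth, the fit check is simple arithmetic, treating “heavy quantization” as 4 bits per weight and ignoring the extra VRAM the context cache needs:

```python
# Does a 4-bit-quantized 7B model fit in 6 GB of VRAM?
params = 7e9
bits_per_weight = 4
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB of weights, out of 6 GB")
```

That leaves roughly 2.5 GB for the context cache and overhead, so a tight but plausible fit.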

          i am wary of how it gets used by companies

          Me too, that’s why I feel it is important that people use these models without relying on companies.