george-ai 0.1.0 (Nov 29, 2024) · MIT license
George

George is an API leveraging AI to make it easy to control a computer with natural language.

Unlike traditional frameworks, which rely on predefined static selectors, this API uses AI vision to interpret the screen. This makes it more resilient to UI changes and able to automate interfaces that traditional tools can't handle.

https://github.com/user-attachments/assets/534bfcf8-13c6-45cf-83b3-98804f9aa432

Example

use george_ai::George;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut george = George::new("https://your-molmo-llm.com");
    george.start().await?;
    george.open_chrome("https://some-website.com").await?;
    george.click("sign in link").await?;
    george.fill_in("input Email text field", "your@email.com").await?;
    george.fill_in("input Password text field", "super-secret").await?;
    george.click("sign in button").await?;
    george.close_chrome().await?;
    george.stop().await?;
    Ok(())
}

Getting Started

Prerequisites

Setting up Molmo

George uses Molmo, a vision-based LLM, to identify UI elements by converting natural language descriptions into screen coordinates which are then used to execute computer interactions.

You can try the online Molmo demo and ask for the point coordinates of an element in an image.
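Molmo typically answers a point query with an XML-like tag such as `<point x="61.5" y="40.4" alt="sign in button">sign in button</point>`, where x and y are percentages of the image dimensions. The sketch below shows how such a reply could be turned into pixel coordinates for a click; the tag format is Molmo's, but the helper names and parsing are illustrative, not George's actual internals:

```rust
/// Extract a numeric attribute like `x="61.5"` from a Molmo-style reply.
/// Illustrative only: George's real parsing may differ.
fn attr(reply: &str, name: &str) -> Option<f64> {
    let pat = format!("{name}=\"");
    let start = reply.find(&pat)? + pat.len();
    let end = reply[start..].find('"')? + start;
    reply[start..end].parse().ok()
}

/// Parse `<point x=".." y="..">` into (x, y) percentages of the image size.
fn parse_point(reply: &str) -> Option<(f64, f64)> {
    Some((attr(reply, "x")?, attr(reply, "y")?))
}

/// Scale percentage coordinates to a concrete screen resolution.
fn to_pixels((x_pct, y_pct): (f64, f64), width: u32, height: u32) -> (i32, i32) {
    (
        (x_pct / 100.0 * width as f64).round() as i32,
        (y_pct / 100.0 * height as f64).round() as i32,
    )
}

fn main() {
    let reply = r#"<point x="61.5" y="40.4" alt="sign in button">sign in button</point>"#;
    if let Some(pct) = parse_point(reply) {
        let (x, y) = to_pixels(pct, 1920, 1080);
        println!("click at ({x}, {y})"); // click at (1181, 436)
    }
}
```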

Docker

To run Molmo in Docker, use the following command (requires a GPU with 24GB of VRAM):

docker run -d --name molmo_container --runtime=nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model allenai/Molmo-7B-D-0924 \
  --trust-remote-code

See this script for an easy way to install Docker with Nvidia support on Ubuntu.

Bare Metal

Alternatively, you can run Molmo on bare metal, which can reduce GPU memory consumption to ~18GB, or even ~12GB by leveraging bitsandbytes. Here are some example projects:

  • Molmo example server
  • Modified Molmo Python server with bitsandbytes option

Cloud

You can run Molmo on Runpod.io via their vLLM pod template. See the video below for a demo (YouTube):

https://github.com/user-attachments/assets/8c38169c-bc54-4128-a409-985ef4a2c1de

Template override:

--host 0.0.0.0 --port 8000 --model allenai/Molmo-7B-D-0924 --trust-remote-code --api-key your-api-key

Roadmap

  • Create a UI to help build out the selectors. It can be time-consuming to come up with an accurate selector.
  • Improve debugging and logging
  • Create bindings for other languages
    • Ruby
    • Python
    • JavaScript/TypeScript
    • Others?

Why the name George?

This is George. Most of the time he does what he's supposed to, but sometimes he doesn't do the right thing at all. He's a living embodiment of current AI expectations.
