Bug description When trying to connect to LM Studio on http://localhost:1234 for a chat completion request, the program hangs. If you exit LM Studio, Spring AI complains that no bytes were received, which makes me think it is still waiting for a response. Please note that using the official OpenAI endpoint works correctly.

I tried sending a chat completion request with curl, and it works after loading the model with lms load:

curl http://localhost:1234/v1/chat/completions ^
  -H "Content-Type: application/json" ^
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" ^
  -d "{ \"model\": \"llama-3.2-1b-instruct\", \"messages\": [ { \"role\": \"user\", \"content\": \"hello\" } ], \"temperature\": 0.7 }"

{
  "id": "chatcmpl-qjmrt2dzv5plaf7w22f8p8",
  "object": "chat.completion",
  "created": 1741679687,
  "model": "llama-3.2-1b-instruct",
  "choices": [
    {
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 11,
    "completion_tokens": 9,
    "total_tokens": 20
  },
  "system_fingerprint": "llama-3.2-1b-instruct"
}

Environment I am using Windows 11 24H2, Java 23.0 (Amazon Corretto), Spring AI M6 (M4 shows the same behaviour) and LM Studio 0.3.8 (0.3.12 shows the same issue). I tried with CORS both enabled and disabled, and with LM Studio both served and not served on the local network; it made no difference.

Steps to reproduce In the attached zip file there's a minimal Spring application built with Spring Initializr that uses Spring AI M6 with the OpenAI module. Just run SpringBootApplication.java.

Expected behavior Main.java is a CommandLineRunner where I build a ChatClient and send the prompt hello to the model. I expect the program to exit once the generation is complete, but it hangs forever.

Minimal Complete Reproducible example See the attached zip file demo.zip
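
For reference, a minimal sketch of what the runner in demo.zip does (the exact code is in the zip; class and bean names here are illustrative, and ChatClient.Builder is the one auto-configured by the Spring AI OpenAI starter):

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class Main {

    // ChatClient.Builder is auto-configured by the Spring AI OpenAI starter
    @Bean
    CommandLineRunner run(ChatClient.Builder builder) {
        return args -> {
            ChatClient chatClient = builder.build();
            // Against LM Studio this call never returns; against api.openai.com it completes normally
            System.out.println(chatClient.prompt().user("hello").call().content());
        };
    }
}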

Comment From: cjjava

I also tried to connect to LM Studio through Spring AI today, but it always hangs in a suspended state.

Comment From: eugene-kamenev

Check if LM Studio supports HTTP/2; if not, you can set up a proxy in front of it.

Comment From: eugene-kamenev

Today I tried to do the same and got the suspended state. As I said above, it seems like an issue with HTTP/2 connection handling in LM Studio. Here is what I did to fix it:

Create a file docker-compose.yaml

version: '3.8'

services:
  nginx:
    image: nginx:latest
    container_name: nginx_proxy
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

Create a file: nginx.conf

events {
    worker_connections 128;
}

http {
    server {
        listen 80 http2;
        server_name localhost;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto http;
        }
    }

    upstream backend {
        server 192.168.128.1:11434;  # Replace with your target LM Studio host
    }
}

After running this, connection to LM Studio works fine.
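
If it helps, this is roughly how the Spring AI client could then be pointed at the proxy instead of at LM Studio directly (just a sketch; port 8080 matches the compose mapping above, and OpenAiApi/NoopApiKey are the Spring AI M6 types used later in this thread):

import org.springframework.ai.model.NoopApiKey;
import org.springframework.ai.openai.api.OpenAiApi;

// Send requests through the nginx proxy (host port 8080 above); nginx speaks HTTP/2 to the
// client and plain HTTP/1.x to the LM Studio upstream
OpenAiApi openAiApi = OpenAiApi.builder()
        .baseUrl("http://localhost:8080")
        .apiKey(new NoopApiKey())
        .build();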

Comment From: DannySortino

Was trying the same thing for the past few hours and racking my brain over the issue. It definitely seems HTTP/2 related...

To make it work without needing a proxy etc., I just changed

 OpenAiApi openAiApi = OpenAiApi.builder()
          .baseUrl("http://localhost:1234")
          .apiKey(new NoopApiKey())
          .build();

to

 OpenAiApi openAiApi = OpenAiApi.builder()
          .baseUrl("http://localhost:1234")
          .apiKey(new NoopApiKey())
          .webClientBuilder(WebClient.builder()
                  // Force HTTP/1.1 for streaming
                  .clientConnector(new JdkClientHttpConnector(HttpClient.newBuilder()
                          .version(HttpClient.Version.HTTP_1_1)
                          .connectTimeout(Duration.ofSeconds(30))
                          .build())))
          .restClientBuilder(RestClient.builder()
                  // Force HTTP/1.1 for non-streaming
                  .requestFactory(new JdkClientHttpRequestFactory(HttpClient.newBuilder()
                          .version(HttpClient.Version.HTTP_1_1)
                          .connectTimeout(Duration.ofSeconds(30))
                          .build())))
          .build();
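
For completeness, a sketch of wiring that OpenAiApi into a ChatClient (assuming the M6 builder APIs on OpenAiChatModel and OpenAiChatOptions; the model name is just the one from the curl example above):

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;

// Chat model on top of the HTTP/1.1-forced OpenAiApi from the snippet above
OpenAiChatModel chatModel = OpenAiChatModel.builder()
        .openAiApi(openAiApi)
        .defaultOptions(OpenAiChatOptions.builder()
                .model("llama-3.2-1b-instruct")
                .build())
        .build();

ChatClient chatClient = ChatClient.builder(chatModel).build();
System.out.println(chatClient.prompt().user("hello").call().content());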

Comment From: ZimaBlueee

Quoting DannySortino's HTTP/1.1 workaround above:

it works!

Comment From: anthonyikeda

Quoting eugene-kamenev's nginx proxy workaround above:

Not exactly ideal, but this works well. You can also replace the upstream server with your 127.0.0.1 IP address (since I think LM Studio needs to be configured to advertise on 0.0.0.0 for your en0 IP).