The JSON that the client sends to the proxy includes a language key. The proxy extracts the value of this language key, sets it in a custom header named x-language, and forwards the request to the ALB. The ALB returns a fixed response according to the value of x-language, and that response is sent back to the client through the proxy.
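To make that flow concrete, here is a minimal sketch of the proxy behavior in Python. It is illustrative only, not the actual proxy used in this setup; the ALB URL and listening port are hypothetical placeholders.

# Minimal sketch of the proxy described above (ALB_URL is a hypothetical placeholder).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ALB_URL = "http://internal-example-alb.ap-northeast-1.elb.amazonaws.com/"  # hypothetical


class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self) -> None:
        # Read the client's JSON body and pull out the "language" key.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        language = body.get("language", "")

        # Forward the request to the ALB with the value copied into the x-language header.
        upstream_req = urllib.request.Request(ALB_URL, headers={"x-language": language})
        with urllib.request.urlopen(upstream_req) as upstream:
            payload = upstream.read()

        # Relay the ALB's fixed response back to the client unchanged.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()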
In last year's post I wrote things like "we're no longer an APN partner" and "I envy the people being recognized," but when I later checked more carefully, it turned out that my employer was in fact eligible to apply. So this year I applied for 2025 Japan All AWS Certifications Engineers and was honored to receive the award. I was also delighted to receive it together with several colleagues at my current company.
On the other hand, I applied for 2025 Japan AWS Top Engineers for the first time as well, but unfortunately did not receive that award. This year I want to keep working to broaden and deepen my activities.
"""OpenAI model provider.- Docs: https://platform.openai.com/docs/overview"""import logging
from typing import Any, AsyncGenerator, Optional, Protocol, Type, TypedDict, TypeVar, Union, cast
import openai
from openai.types.chat.parsed_chat_completion import ParsedChatCompletion
from pydantic import BaseModel
from typing_extensions import Unpack, override
from ..types.content import Messages
from ..types.models import OpenAIModel as SAOpenAIModel
logger = logging.getLogger(__name__)
T = TypeVar("T", bound=BaseModel)
classClient(Protocol):
"""Protocol defining the OpenAI-compatible interface for the underlying provider client."""@property# pragma: no coverdefchat(self) -> Any:
"""Chat completions interface."""
...
classOpenAIModel(SAOpenAIModel):
"""OpenAI model provider implementation."""
client: Client
classOpenAIConfig(TypedDict, total=False):
"""Configuration options for OpenAI models. Attributes: model_id: Model ID (e.g., "gpt-4o"). For a complete list of supported models, see https://platform.openai.com/docs/models. params: Model parameters (e.g., max_tokens). For a complete list of supported parameters, see https://platform.openai.com/docs/api-reference/chat/create. """
model_id: str
params: Optional[dict[str, Any]]
def__init__(self, client_args: Optional[dict[str, Any]] = None, **model_config: Unpack[OpenAIConfig]) -> None:
"""Initialize provider instance. Args: client_args: Arguments for the OpenAI client. For a complete list of supported arguments, see https://pypi.org/project/openai/. **model_config: Configuration options for the OpenAI model. """
self.config = dict(model_config)
logger.debug("config=<%s> | initializing", self.config)
client_args = client_args or {}
#self.client = openai.OpenAI(**client_args) #変更前
self.client = openai.AzureOpenAI(**client_args) #変更後@overridedefupdate_config(self, **model_config: Unpack[OpenAIConfig]) -> None: # type: ignore[override]"""Update the OpenAI model configuration with the provided arguments. Args: **model_config: Configuration overrides. """
self.config.update(model_config)
@overridedefget_config(self) -> OpenAIConfig:
"""Get the OpenAI model configuration. Returns: The OpenAI model configuration. """return cast(OpenAIModel.OpenAIConfig, self.config)
@overrideasyncdefstream(self, request: dict[str, Any]) -> AsyncGenerator[dict[str, Any], None]:
"""Send the request to the OpenAI model and get the streaming response. Args: request: The formatted request to send to the OpenAI model. Returns: An iterable of response events from the OpenAI model. """
response = self.client.chat.completions.create(**request)
yield {"chunk_type": "message_start"}
yield {"chunk_type": "content_start", "data_type": "text"}
tool_calls: dict[int, list[Any]] = {}
for event in response:
# Defensive: skip events with empty or missing choicesifnotgetattr(event, "choices", None):
continue
choice = event.choices[0]
if choice.delta.content:
yield {"chunk_type": "content_delta", "data_type": "text", "data": choice.delta.content}
ifhasattr(choice.delta, "reasoning_content") and choice.delta.reasoning_content:
yield {
"chunk_type": "content_delta",
"data_type": "reasoning_content",
"data": choice.delta.reasoning_content,
}
for tool_call in choice.delta.tool_calls or []:
tool_calls.setdefault(tool_call.index, []).append(tool_call)
if choice.finish_reason:
breakyield {"chunk_type": "content_stop", "data_type": "text"}
for tool_deltas in tool_calls.values():
yield {"chunk_type": "content_start", "data_type": "tool", "data": tool_deltas[0]}
for tool_delta in tool_deltas:
yield {"chunk_type": "content_delta", "data_type": "tool", "data": tool_delta}
yield {"chunk_type": "content_stop", "data_type": "tool"}
yield {"chunk_type": "message_stop", "data": choice.finish_reason}
# Skip remaining events as we don't have use for anything except the final usage payloadfor event in response:
_ = event
yield {"chunk_type": "metadata", "data": event.usage}
@overrideasyncdefstructured_output(
self, output_model: Type[T], prompt: Messages
) -> AsyncGenerator[dict[str, Union[T, Any]], None]:
"""Get structured output from the model. Args: output_model: The output model to use for the agent. prompt: The prompt messages to use for the agent. Yields: Model events with the last being the structured output. """
response: ParsedChatCompletion = self.client.beta.chat.completions.parse( # type: ignore
model=self.get_config()["model_id"],
messages=super().format_request(prompt)["messages"],
response_format=output_model,
)
parsed: T | None = None# Find the first choice with tool_callsiflen(response.choices) > 1:
raiseValueError("Multiple choices found in the OpenAI response.")
for choice in response.choices:
ifisinstance(choice.message.parsed, output_model):
parsed = choice.message.parsed
breakif parsed:
yield {"output": parsed}
else:
raiseValueError("No valid tool use or tool use input was found in the OpenAI response.")
"Maybe this will just work?!" With my hopes up, I skimmed through the documentation.
It included sample code and a note saying "To connect to a custom OpenAI-compatible server, you will pass in its base_url into the client_args:", so I tried specifying the Azure endpoint URL as base_url.
Failing code:
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
model = OpenAIModel(
client_args={
"api_key": "xxxxxxxx",
"base_url": "https://xxxxxxxx.openai.azure.com/",
},
# **model_config
model_id="gpt-4o",
params={
"max_tokens": 1000,
"temperature": 0.7,
}
)
agent = Agent(model=model, tools=[calculator])
message = """Tell me about 1&2.1. Calculate 23456 * 98762. Your LLM model. Name and Release date and feature.回答は日本語で出力してください。"""
agent(message)
I gave this a try as-is, but it failed with a Resource not found error.
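For context, "Resource not found" is the 404 that Azure OpenAI typically returns when the request path (deployment name or api-version) does not match an actual resource; a plain OpenAI-style base_url plus model_id does not map onto Azure's deployment-based URLs. Below is a minimal sketch of one direction, assuming the AzureOpenAI-patched OpenAIModel shown earlier; the endpoint, API version, and deployment name are hypothetical placeholders, and azure_endpoint / api_version are standard arguments of openai.AzureOpenAI rather than anything specific to Strands.

# A sketch only, assuming the AzureOpenAI-patched OpenAIModel above.
# The endpoint, API version, and deployment name are hypothetical placeholders.
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator

model = OpenAIModel(
    client_args={
        "api_key": "xxxxxxxx",
        "azure_endpoint": "https://xxxxxxxx.openai.azure.com/",  # instead of base_url
        "api_version": "2024-06-01",  # whichever API version the resource supports
    },
    model_id="gpt-4o",  # with AzureOpenAI this is resolved as the deployment name
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    },
)

agent = Agent(model=model, tools=[calculator])
agent("Calculate 23456 * 9876.")  # simple smoke test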
[ssm-user@ip-10-0-0-29 openvpn]$ nslookup internal-OnPremALB-1015627341.ap-northeast-1.elb.amazonaws.com
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
Server: 10.0.0.2
Address: 10.0.0.2#53
Non-authoritative answer:
Name: internal-OnPremALB-1015627341.ap-northeast-1.elb.amazonaws.com
Address: 10.0.0.61
Name: internal-OnPremALB-1015627341.ap-northeast-1.elb.amazonaws.com
Address: 10.0.0.77
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
[ssm-user@ip-10-0-0-29 openvpn]$ nslookup internal-ClientVPNALB-1301670922.ap-northeast-1.elb.amazonaws.com
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
Server: 10.0.0.2
Address: 10.0.0.2#53
Non-authoritative answer:
Name: internal-ClientVPNALB-1301670922.ap-northeast-1.elb.amazonaws.com
Address: 172.16.0.38
Name: internal-ClientVPNALB-1301670922.ap-northeast-1.elb.amazonaws.com
Address: 172.16.0.75
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
;; communications error to 172.16.0.2#53: timed out
[ssm-user@ip-10-0-0-29 openvpn]$
[ssm-user@ip-10-0-0-29 openvpn]$ curl http://internal-clientvpnalb-1301670922.ap-northeast-1.elb.amazonaws.com
This is Client VPN side ALB.
[ssm-user@ip-10-0-0-29 openvpn]$
[ssm-user@ip-10-0-0-29 openvpn]$
[ssm-user@ip-10-0-0-29 openvpn]$ curl http://internal-onpremalb-1015627341.ap-northeast-1.elb.amazonaws.com
This is on-premises side ALB.
[ssm-user@ip-10-0-0-29 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ ip route
default via 10.0.0.1 dev ens5 proto dhcp src 10.0.0.26 metric 512
10.0.0.0/27 dev ens5 proto kernel scope link src 10.0.0.26 metric 512
10.0.0.1 dev ens5 proto dhcp scope link src 10.0.0.26 metric 512
169.254.169.253 via 10.0.0.1 dev ens5 proto dhcp src 10.0.0.26 metric 512
172.16.0.0/24 via 192.168.0.161 dev tun0
192.168.0.160/27 dev tun0 proto kernel scope link src 192.168.0.162
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         ip-10-0-0-1.ap- 0.0.0.0         UG    512    0        0 ens5
ip-10-0-0-0.ap- 0.0.0.0         255.255.255.224 U     512    0        0 ens5
ip-10-0-0-1.ap- 0.0.0.0         255.255.255.255 UH    512    0        0 ens5
169.254.169.253 ip-10-0-0-1.ap- 255.255.255.255 UGH   512    0        0 ens5
ip-172-16-0-0.a ip-192-168-0-16 255.255.255.0   UG    0      0        0 tun0
ip-192-168-0-16 0.0.0.0         255.255.255.224 U     0      0        0 tun0
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ sudo vim /etc/systemd/resolved.conf
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ sudo cat /etc/systemd/resolved.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file, or by creating "drop-ins" in
# the resolved.conf.d/ subdirectory. The latter is generally recommended.
# Defaults can be restored by simply deleting this file and all drop-ins.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.

[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google:     8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9:      9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
DNS=172.16.0.2
#FallbackDNS=
#Domains=
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=no
#LLMNR=no
#Cache=yes
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ sudo systemctl restart systemd-resolved.service
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ sudo tail /etc/resolv.conf
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 172.16.0.2
nameserver 169.254.169.253
search .
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
Now let's try name resolution and access.
[ssm-user@ip-10-0-0-26 openvpn]$ nslookup alb.yamashita-test-20250429-vpnvpc.com
Server: 172.16.0.2
Address: 172.16.0.2#53
Non-authoritative answer:
alb.yamashita-test-20250429-vpnvpc.com canonical name = internal-clientvpnalb-41888842.ap-northeast-1.elb.amazonaws.com.
Name: internal-clientvpnalb-41888842.ap-northeast-1.elb.amazonaws.com
Address: 172.16.0.61
Name: internal-clientvpnalb-41888842.ap-northeast-1.elb.amazonaws.com
Address: 172.16.0.69
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ nslookup alb.yamashita-test-20250429-onpre.com
Server: 172.16.0.2
Address: 172.16.0.2#53
Non-authoritative answer:
alb.yamashita-test-20250429-onpre.com canonical name = internal-onpremalb-1107595459.ap-northeast-1.elb.amazonaws.com.
Name: internal-onpremalb-1107595459.ap-northeast-1.elb.amazonaws.com
Address: 10.0.0.69
Name: internal-onpremalb-1107595459.ap-northeast-1.elb.amazonaws.com
Address: 10.0.0.59
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ curl alb.yamashita-test-20250429-vpnvpc.com
This is Client VPN side ALB.
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$ curl alb.yamashita-test-20250429-onpre.com
This is on-premises side ALB.
[ssm-user@ip-10-0-0-26 openvpn]$
[ssm-user@ip-10-0-0-26 openvpn]$
That worked without any problems. Let's also check the routing table.
[ssm-user@ip-10-0-0-26 bin]$ ip route
default via 10.0.0.1 dev ens5 proto dhcp src 10.0.0.26 metric 512
10.0.0.0/27 dev ens5 proto kernel scope link src 10.0.0.26 metric 512
10.0.0.1 dev ens5 proto dhcp scope link src 10.0.0.26 metric 512
169.254.169.253 via 10.0.0.1 dev ens5 proto dhcp src 10.0.0.26 metric 512
172.16.0.0/24 via 192.168.0.1 dev tun0
192.168.0.0/27 dev tun0 proto kernel scope link src 192.168.0.2
[ssm-user@ip-10-0-0-26 bin]$
[ssm-user@ip-10-0-0-26 bin]$
[ssm-user@ip-10-0-0-26 bin]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         ip-10-0-0-1.ap- 0.0.0.0         UG    512    0        0 ens5
ip-10-0-0-0.ap- 0.0.0.0         255.255.255.224 U     512    0        0 ens5
ip-10-0-0-1.ap- 0.0.0.0         255.255.255.255 UH    512    0        0 ens5
169.254.169.253 ip-10-0-0-1.ap- 255.255.255.255 UGH   512    0        0 ens5
ip-172-16-0-0.a ip-192-168-0-1. 255.255.255.0   UG    0      0        0 tun0
ip-192-168-0-0. 0.0.0.0         255.255.255.224 U     0      0        0 tun0
[ssm-user@ip-10-0-0-26 bin]$