# g4f-v2: An Unstable Early-Beta Interference OpenAI Proxy API (For Developers)
**Note: This version of g4f is still unstable and intended for developers only. Use it with caution.**
## Introduction
g4f-v2 is a library that acts as an intermediary between your application and the OpenAI GPT-3.5 Turbo language model. It provides an API for interacting with the model and handling chat completions.
## Running the Server
To start the g4f-v2 server, run the following command:
```shell
python3 -m interference.app
```
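
For a quick sanity check that the server is up, you can hit it directly before wiring in a client. A minimal sketch using the `requests` library, assuming the proxy listens on `http://127.0.0.1:1337` (the address used in the examples below) and exposes an OpenAI-compatible `/chat/completions` route:

```python
import requests

# Assumed endpoint: an OpenAI-compatible chat completions route on the local proxy.
resp = requests.post(
    'http://127.0.0.1:1337/chat/completions',
    json={
        'model': 'gpt-3.5-turbo',
        'messages': [{'role': 'user', 'content': 'ping'}],
        'stream': False,
    },
)
print(resp.status_code)
print(resp.json())
```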

## Usage Examples

### Using the OpenAI Python Package

First, ensure you have the OpenAI Python package installed. You can then configure it to use g4f-v2 as the API endpoint:

```python
import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

chat_completion = openai.ChatCompletion.create(
    stream=True,
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

for token in chat_completion:
    content = token['choices'][0]['delta'].get('content')
    if content is not None:
        print(content)
```
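
If you don't need token-by-token output, the same client can be used without streaming. A brief sketch, assuming the proxy also handles the non-streaming path (this is an early beta, so it may be less reliable than streaming):

```python
import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

# Non-streaming call (assumed supported): the full reply arrives in one response.
chat_completion = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

print(chat_completion['choices'][0]['message']['content'])
```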

### Simple Usage

g4f-v2 supports multiple providers, including `g4f.Providers.You`, `g4f.Providers.Ails`, and `g4f.Providers.Phind`. Here's how you can use them:

```python
import g4f

# Accessing provider parameters
print(g4f.Providers.Ails.params)  # Displays supported arguments

# Automatic provider selection
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)

# Using a specific provider
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Providers.Phind, messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)
```
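
Streaming is optional here as well. A small sketch, assuming that with `stream=False` the `create` call returns the completed text as a single string (behaviour may change while g4f-v2 is in beta):

```python
import g4f

# Non-streaming call (assumed): the whole completion is returned at once.
response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Providers.Ails,
    messages=[{"role": "user", "content": "Hello world"}],
    stream=False)

print(response)
```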

## Development

More detailed instructions will be added to this section soon. The `g4f.Providers` class is a crucial component of the library. You can define default providers and their behavior in separate files within the `g4f/Providers` directory. Each provider file should have the following structure:

`./g4f/Providers/ProviderName.py`:

```python
import os
from typing import get_type_hints

url: str = 'https://{site_link}'
model: str = 'gpt-[version]'

def _create_completion(prompt: str, args...):
    return ...   # return the full completion at once
    # or
    yield ...    # yield tokens for streaming

params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}"
               for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
```
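
To make the template concrete, here is a hedged sketch of a hypothetical provider file. The backend URL, request payload, and response shape are illustrative assumptions only (no such endpoint ships with g4f-v2); a real provider must follow whatever API its backend actually exposes:

```python
# ./g4f/Providers/ExampleProvider.py (hypothetical illustration)
import os
from typing import get_type_hints

import requests

url: str = 'https://example-backend.invalid'  # placeholder, not a real site
model: str = 'gpt-3.5-turbo'

def _create_completion(prompt: str, stream: bool, **kwargs):
    # Illustrative request; the route and payload are assumptions, not a real API.
    response = requests.post(f'{url}/api/chat',
                             json={'prompt': prompt, 'model': model})
    response.raise_for_status()
    # Yield the text in a single chunk so callers can treat it like a stream.
    yield response.json().get('completion', '')

params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}"
               for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
```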