Forked from g4f/gpt4free
Commit 734d984dd9 (README.md, 80 changes)
# g4f-v2: An Unstable Early-Beta Interference OpenAI Proxy API (For Developers)

**Note: This version of g4f is still unstable and intended for developers only. Use it with caution.**

## Introduction
g4f-v2 is a library that acts as an intermediary between your application and the OpenAI GPT-3.5 Turbo language model. It provides an API for interacting with the model and handling chat completions.
## Running the Server
To start the g4f-v2 server, run the following command:

```shell
python3 -m interference.app
```
## Usage Examples

### Using the OpenAI Python Package

First, ensure you have the OpenAI Python package installed. You can then configure it to use g4f-v2 as the API endpoint:

```python
import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

chat_completion = openai.ChatCompletion.create(stream=True,
    model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

# print(chat_completion.choices[0].message.content)

for token in chat_completion:
    content = token['choices'][0]['delta'].get('content')
    if content is not None:
        print(content)
```
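Each item yielded by a streamed completion carries only a small delta; the loop above prints the fragments as they arrive. A minimal sketch of accumulating those fragments into the full reply (the hard-coded `chunks` list below is a hypothetical stand-in for what a live server would stream back):

```python
# Sketch: accumulate streamed delta chunks into one string.
# `chunks` is hypothetical sample data; a real run would iterate
# over the chat_completion object returned by the proxy.
chunks = [
    {'choices': [{'delta': {'role': 'assistant'}}]},
    {'choices': [{'delta': {'content': 'Hello'}}]},
    {'choices': [{'delta': {'content': ' world'}}]},
    {'choices': [{'delta': {}}]},  # final chunk carries no content
]

reply = ''
for token in chunks:
    content = token['choices'][0]['delta'].get('content')
    if content is not None:
        reply += content

print(reply)  # Hello world
```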
|
|
||||||
|
|
||||||
### simple usage:
|
Simple Usage
|
||||||
|
|
||||||
providers:
|
g4f-v2 supports multiple providers, including g4f.Providers.You, g4f.Providers.Ails, and g4f.Providers.Phind. Here's how you can use them:
|
||||||
```py
|
|
||||||
g4f.Providers.You
|
|
||||||
g4f.Providers.Ails
|
|
||||||
g4f.Providers.Phind
|
|
||||||
|
|
||||||
# usage:
|
python
|
||||||
|
|
||||||
response = g4f.ChatCompletion.create(..., provider=g4f.Providers.ProviderName)
|
|
||||||
```
```py
import g4f

# Accessing provider parameters
print(g4f.Providers.Ails.params)  # Displays supported arguments

# Automatic provider selection

# streamed completion
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)

# normal response
response = g4f.ChatCompletion.create(model=g4f.Models.gpt_4, messages=[
    {"role": "user", "content": "hi"}])  # alternative model setting

print(response)

# Using a specific provider
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Providers.Phind, messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)
```
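Because individual providers can be flaky in an early beta, a common pattern is to try several providers in order and fall back when one fails. A hypothetical sketch of that pattern (the stub functions below stand in for real `g4f.ChatCompletion.create(..., provider=...)` calls, which are not exercised here):

```python
# Sketch: try providers in order, falling back when one raises.
# ask_you and ask_phind are hypothetical stubs simulating provider calls.

def ask_you(prompt):
    raise RuntimeError('provider down')   # simulate a failing provider

def ask_phind(prompt):
    return f'answer to: {prompt}'         # simulate a working provider

def ask_with_fallback(prompt, providers):
    for provider in providers:
        try:
            return provider(prompt)
        except Exception:
            continue                      # move on to the next provider
    raise RuntimeError('all providers failed')

print(ask_with_fallback('hi', [ask_you, ask_phind]))  # answer to: hi
```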
## Development

More instructions are coming soon. The `g4f.Providers` class is a crucial component of the library. You can define default providers and their behavior in separate files within the `g4f/Providers` directory. Each provider file should have the following structure:

`./g4f/Providers/ProviderName.py`:

```python
import os
from typing import get_type_hints

url: str = 'https://{site_link}'
model: str = 'gpt-[version]'

def _create_completion(prompt: str, args...):
    return ...
    # or
    yield ...

params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
```
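The `params` line in the template introspects `_create_completion`'s signature to advertise which arguments a provider supports. A runnable sketch with a hypothetical signature shows what that string looks like (the module name is written out here, since this snippet does not live in its own provider file):

```python
from typing import get_type_hints

# Hypothetical provider function standing in for a real _create_completion.
def _create_completion(prompt: str, stream: bool):
    yield prompt

# Same construction as the template: pair each positional argument name
# with the name of its annotated type.
params = 'g4f.Providers.ProviderName supports: ' + ', '.join(
    [f"{name}: {get_type_hints(_create_completion)[name].__name__}"
     for name in _create_completion.__code__.co_varnames[
         :_create_completion.__code__.co_argcount]])

print(params)  # g4f.Providers.ProviderName supports: prompt: str, stream: bool
```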