Forked from g4f/gpt4free
This commit is contained in:
EbaAaZ 2023-05-17 09:57:36 -04:00
Commit 734d984dd9
1 changed file with 34 additions and 46 deletions


# g4f-v2: An Unstable Early-Beta Interference OpenAI Proxy API (For Developers)

**Note: This version of g4f is still unstable and intended for developers only. Use it with caution.**

## Introduction

g4f-v2 is a library that acts as an intermediary between your application and the OpenAI GPT-3.5 Turbo language model. It provides an API for interacting with the model and handling chat completions.

## Running the Server

To start the g4f-v2 server, run the following command:

```shell
python3 -m interference.app
```
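
Once the server is up, you can sanity-check it without the openai package by posting to the proxy directly. The snippet below is only a sketch: it assumes the proxy exposes an OpenAI-style `/chat/completions` route on port 1337 (inferred from the `openai.api_base` value used in the next section), so adjust the path if your build differs.

```python
# Minimal sketch: assumes an OpenAI-style /chat/completions route on the local proxy.
import requests

resp = requests.post(
    'http://127.0.0.1:1337/chat/completions',
    json={
        'model': 'gpt-3.5-turbo',
        'messages': [{'role': 'user', 'content': 'say hello'}],
        'stream': False,
    },
)

print(resp.status_code)
print(resp.json())  # expected to mirror the OpenAI chat completion response shape
```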

## Usage Examples

### Using the OpenAI Python Package

First, ensure you have the OpenAI Python package installed. You can then configure it to use g4f-v2 as the API endpoint:

```python
import openai
openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

chat_completion = openai.ChatCompletion.create(stream=True,
    model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'write a poem about a tree'}])
# print(chat_completion.choices[0].message.content)  # only for non-streamed responses

for token in chat_completion:
    content = token['choices'][0]['delta'].get('content')
    if content is not None:
        print(content)
```
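
For a non-streamed response through the same proxy, drop `stream=True` and read the message content directly, as the commented-out line above suggests (this assumes the proxy returns the standard OpenAI response shape):

```python
# Non-streamed completion through the g4f-v2 proxy, using the standard openai package.
import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

chat_completion = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

print(chat_completion.choices[0].message.content)
```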

### Simple Usage

g4f-v2 supports multiple providers, including g4f.Providers.You, g4f.Providers.Ails, and g4f.Providers.Phind. Here's how you can use them:

```python
response = g4f.ChatCompletion.create(..., provider=g4f.Providers.ProviderName)
```

```python
import g4f

# Accessing provider parameters
print(g4f.Providers.Ails.params)  # Displays supported arguments

# Automatic provider selection, streamed completion
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)

# Normal (non-streamed) response
response = g4f.ChatCompletion.create(model=g4f.Models.gpt_4, messages=[
    {"role": "user", "content": "hi"}])  # alternative model setting
print(response)

# Using a specific provider
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Providers.Phind, messages=[
    {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)
```
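
Because individual provider backends can be unreliable in this early beta, one practical pattern is to try several providers in order and keep the first answer. The loop below is only an illustration built from the calls shown above; the fallback logic itself is not part of g4f:

```python
# Illustrative sketch: try the documented providers in order until one answers.
import g4f

providers = [g4f.Providers.You, g4f.Providers.Ails, g4f.Providers.Phind]

response = None
for provider in providers:
    try:
        response = g4f.ChatCompletion.create(
            model='gpt-3.5-turbo',
            provider=provider,
            messages=[{"role": "user", "content": "Hello world"}])
        break  # stop at the first provider that answers
    except Exception as exc:
        print(f'{provider} failed: {exc}')

print(response)
```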

## Development

More instructions will be added here soon. The `g4f.Providers` class is a crucial component of the library. You can define default providers and their behavior in separate files within the `g4f/Providers` directory. Each provider file should have the following structure:

`./g4f/Providers/ProviderName.py`:

```python
import os
from typing import get_type_hints

url: str = 'https://{site_link}'
model: str = 'gpt-[version]'

def _create_completion(prompt: str, args...):
    return ...
    # or
    yield ...

params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
```
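
As a concrete illustration of that template, here is a hypothetical provider module. The endpoint, request payload, and response field are invented for the example and do not correspond to any real backend; only the module layout (`url`, `model`, `_create_completion`, `params`) follows the structure above.

```python
# ./g4f/Providers/ExampleProvider.py -- hypothetical provider; endpoint and fields are invented.
import os
from typing import get_type_hints

import requests

url: str = 'https://example-chat.invalid'  # placeholder backend
model: str = 'gpt-3.5-turbo'


def _create_completion(prompt: str, stream: bool = False, **kwargs):
    # Post the prompt to the (hypothetical) backend and yield the answer text.
    response = requests.post(f'{url}/api/chat', json={'prompt': prompt, 'model': model})
    response.raise_for_status()
    yield response.json().get('answer', '')


params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}"
               for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
```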