
unstable g4f-v2 early beta, for developers only!

interference: OpenAI proxy API (use with the openai Python package)

run server:

python3 -m interference.app

import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

chat_completion = openai.ChatCompletion.create(stream=True,
    model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

# without stream=True, the full reply could be read at once:
# print(chat_completion.choices[0].message.content)

for token in chat_completion:
    # each streamed chunk carries a partial message under choices[0].delta
    content = token['choices'][0]['delta'].get('content')
    if content is not None:
        print(content)
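
A minimal non-streaming variant, assuming the proxy returns standard OpenAI-shaped responses (as the commented line above suggests):

import openai

openai.api_key = ''
openai.api_base = 'http://127.0.0.1:1337'

# one-shot (non-streaming) request through the local proxy
chat_completion = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'write a poem about a tree'}])

print(chat_completion.choices[0].message.content)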

simple usage:

providers:

from g4f.Provider import (
    Phind, 
    You, 
    Bing, 
    Openai, 
    Yqcloud, 
    Theb, 
    Aichat,
    Ora,
    Aws,
    Bard,
    Vercel,
    Pierangelo,
    Forefront
)


# usage:
import g4f

response = g4f.ChatCompletion.create(..., provider=ProviderName)

print(g4f.Provider.Ails.params)  # supported args

# Automatic selection of provider

# streamed completion
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', messages=[
                                     {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)

# normal response
response = g4f.ChatCompletion.create(model=g4f.Models.gpt_4, messages=[
                                     {"role": "user", "content": "hi"}]) # alternative model setting

print(response)


# Set with provider
response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Provider.Openai, messages=[
                                     {"role": "user", "content": "Hello world"}], stream=True)

for message in response:
    print(message)
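
As a convenience sketch (not part of the g4f library), the streamed chunks from the calls above can be collected into a single string; this assumes each chunk is a plain string, as in the loops above:

# hypothetical helper: gather a streamed g4f response into one string
def collect_response(stream):
    parts = []
    for message in stream:
        parts.append(message)
    return ''.join(parts)

full_text = collect_response(
    g4f.ChatCompletion.create(model='gpt-3.5-turbo',
                              messages=[{"role": "user", "content": "Hello world"}],
                              stream=True))
print(full_text)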

Dev

(more instructions soon)
the g4f.Provider class

default:

./g4f/Provider/Providers/ProviderName.py:

import os
from typing import get_type_hints


url: str = 'https://{site_link}'
model: str = 'gpt-[version]'

def _create_completion(prompt: str, **kwargs):
    # return the full response as a string, or yield tokens for streaming
    return ...
    # or
    # yield ...


params = f'g4f.Provider.{os.path.basename(__file__)[:-3]} supports: ' + \
    ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])


Release

Work in progress...

Donate

Work in progress...