From a30d1985304396d7afb969d657d7af1ffb29dd96 Mon Sep 17 00:00:00 2001
From: Chi Wang
Date: Sat, 10 Jun 2023 18:03:49 -0700
Subject: [PATCH] Fix documentation (#1075)

* Fix indentation in documentation

* newline

* version
---
 README.md                                     |  4 +-
 ...nt_auto_feedback_from_code_execution.ipynb |  2 +-
 notebook/autogen_agent_human_feedback.ipynb   |  2 +-
 notebook/autogen_agent_web_info.ipynb         |  2 +-
 website/docs/Getting-Started.md               | 44 +++++++++----------
 5 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/README.md b/README.md
index e645841bd..aa6a66afe 100644
--- a/README.md
+++ b/README.md
@@ -14,8 +14,10 @@

-:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web)
+:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
+
 :fire: [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
+
 :fire: FLAML supports AutoML and Hyperparameter Tuning features in [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) private preview. Sign up for these features at: https://aka.ms/fabric/data-science/sign-up.
diff --git a/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb
index 686e28d7e..30e767c36 100644
--- a/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb
+++ b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb
@@ -44,7 +44,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[autogen]"
+    "# %pip install flaml[autogen]==2.0.0rc1"
    ]
   },
   {
diff --git a/notebook/autogen_agent_human_feedback.ipynb b/notebook/autogen_agent_human_feedback.ipynb
index be702636c..682a842cd 100644
--- a/notebook/autogen_agent_human_feedback.ipynb
+++ b/notebook/autogen_agent_human_feedback.ipynb
@@ -44,7 +44,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[autogen]"
+    "# %pip install flaml[autogen]==2.0.0rc1"
    ]
   },
   {
diff --git a/notebook/autogen_agent_web_info.ipynb b/notebook/autogen_agent_web_info.ipynb
index 2b11cd09c..2647fa716 100644
--- a/notebook/autogen_agent_web_info.ipynb
+++ b/notebook/autogen_agent_web_info.ipynb
@@ -44,7 +44,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[autogen]"
+    "# %pip install flaml[autogen]==2.0.0rc1"
    ]
   },
   {
diff --git a/website/docs/Getting-Started.md b/website/docs/Getting-Started.md
index 2ab63b47b..f511f2500 100644
--- a/website/docs/Getting-Started.md
+++ b/website/docs/Getting-Started.md
@@ -23,30 +23,30 @@ There are several ways of using flaml:
 #### (New) [Auto Generation](/docs/Use-Cases/Auto-Generation)
 
 Maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4, including:
-  - A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalites like tuning, caching, templating, filtering. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
-  ```python
-  from flaml import oai
+- A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalities like tuning, caching, templating, filtering. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
+```python
+from flaml import oai
 
-  # perform tuning
-  config, analysis = oai.Completion.tune(
-      data=tune_data,
-      metric="success",
-      mode="max",
-      eval_func=eval_func,
-      inference_budget=0.05,
-      optimization_budget=3,
-      num_samples=-1,
-  )
+# perform tuning
+config, analysis = oai.Completion.tune(
+    data=tune_data,
+    metric="success",
+    mode="max",
+    eval_func=eval_func,
+    inference_budget=0.05,
+    optimization_budget=3,
+    num_samples=-1,
+)
 
-  # perform inference for a test instance
-  response = oai.Completion.create(context=test_instance, **config)
-  ```
-  - LLM-driven intelligent agents which can perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,
-  ```python
-  assistant = AssistantAgent("assistant")
-  user = UserProxyAgent("user", human_input_mode="TERMINATE")
-  assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
-  ```
+# perform inference for a test instance
+response = oai.Completion.create(context=test_instance, **config)
+```
+- LLM-driven intelligent agents which can perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,
+```python
+assistant = AssistantAgent("assistant")
+user = UserProxyAgent("user", human_input_mode="TERMINATE")
+assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
+```
 
 #### [Task-oriented AutoML](/docs/Use-Cases/task-oriented-automl)
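
Note: the `tune_data` and `eval_func` names in the Getting-Started hunk above are left undefined in the snippet. A minimal sketch of what they might look like, assuming each tuning instance is a dict whose non-prompt fields are passed to `eval_func` as keyword arguments (the `problem`/`answer` field names below are hypothetical):

```python
# Hypothetical tuning data: one dict per instance; "problem" would feed the
# prompt template and "answer" is kept for scoring (field names are assumed).
tune_data = [
    {"problem": "What is 2 + 2?", "answer": "4"},
    {"problem": "What is 7 * 6?", "answer": "42"},
]

# Sketch of an eval_func: it receives the list of generated responses for an
# instance plus that instance's fields as keyword arguments, and returns a
# dict containing the metric named in tune() ("success" here).
def eval_func(responses, **instance):
    success = any(r.strip() == instance["answer"] for r in responses)
    return {"success": float(success)}
```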
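Similarly, the agent snippet in the same hunk omits its imports. A sketch of a more self-contained version, assuming the `flaml.autogen.agent` import path used by the notebooks touched in this patch (install with the pinned `flaml[autogen]==2.0.0rc1` as above):

```python
# Sketch only: the import path and the single-argument receive() call follow
# the Getting-Started snippet and 2.0.0rc1-era notebooks; both are assumptions.
from flaml.autogen.agent import AssistantAgent, UserProxyAgent

assistant = AssistantAgent("assistant")
user = UserProxyAgent("user", human_input_mode="TERMINATE")
assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
```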