Commit Graph

24 Commits

Author SHA1 Message Date
Li Jiang
1285700d7a Update readme, bump version to 2.4.0, fix CI errors (#1466)
* Update gitignore

* Bump version to 2.4.0

* Update readme

* Pre-download california housing data

* Use pre-downloaded california housing data

* Pin lightning<=2.5.6

* Fix typo in find and replace

* Fix "estimators has no attribute __sklearn_tags__" error

* Pin torch to 2.2.2 in tests

* Fix conflict

* Update pytorch-forecasting

* Update pytorch-forecasting

* Update pytorch-forecasting

* Use numpy<2 for testing

* Update scikit-learn

* Run Build and UT every other day

* Pin pip<24.1

* Pin pip<24.1 in pipeline

* Loosen pip, install pytorch_forecasting only in py311

* Add support for new versions of nlp dependencies

* Fix formats

* Remove redefinition

* Update mlflow versions

* Fix mlflow version syntax

* Update gitignore

* Clean up cache to free space

* Remove the cache cleanup action

* Fix blendsearch

* Update test workflow

* Update setup.py

* Fix catboost version

* Update workflow

* Prepare for python 3.14

* Support running without catboost

* Fix tests

* Fix python_requires

* Update test workflow

* Fix vw tests

* Remove python 3.9

* Fix nlp tests

* Fix prophet

* Print pip freeze for better debugging

* Fix "Optuna search does not support parameters of type Float with samplers of type Quantized" error

* Save dependencies for later inspection

* Fix error when coverage.xml does not exist

* Fix github action permission

* Handle python 3.13

* Handle the case where openml is not installed

* Check dependencies before running tests

* Update dependencies

* Fix syntax error

* Use bash

* Update dependencies

* Fix git error

* Loosen mlflow constraints

* Add rerun, use mlflow-skinny

* Fix git error

* Remove ray tests

* Update xgboost versions

* Fix automl pickle error

* Don't test python 3.10 on macos as it's stuck

* Rebase before push

* Reduce number of branches
2026-01-09 13:40:52 +08:00
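Several of the bullet points in the commit above pin test-time dependencies (lightning<=2.5.6, torch 2.2.2, numpy<2, pip<24.1). Below is a minimal, hedged sketch of how such pins might be expressed in a setup.py test extra; the extra's name, the surrounding entries, and the file layout are assumptions rather than the repository's actual configuration (the pip<24.1 pin, for instance, lives in the CI workflow rather than setup.py).

```python
# Hypothetical excerpt of a setuptools "test" extra reflecting the pins named in
# the commit above; names and structure are illustrative assumptions.
from setuptools import find_packages, setup

setup(
    name="flaml",
    version="2.4.0",
    packages=find_packages(),
    extras_require={
        "test": [
            "lightning<=2.5.6",  # avoid breakage seen with newer lightning releases
            "torch==2.2.2",      # pinned for reproducible CI runs
            "numpy<2",           # NumPy 2.x not yet supported by every test dependency
        ],
    },
)
```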
Jirka Borovec
b348cb1136 configure & apply pyupgrade with py3.8+ (#1333)
* configure pyupgrade with `py3.8+`

* apply update

---------

Co-authored-by: Li Jiang <bnujli@gmail.com>
2024-08-12 02:54:18 +00:00
Li Jiang
d8129b9211 Fix typos, upgrade yarn packages, add some improvements (#1290)
* Fix typos, upgrade yarn packages, add some improvements

* Fix joblib 1.4.0 breaks joblib-spark

* Fix xgboost test error

* Pin xgboost<2.0.0

* Try update prophet to 1.5.1

* Update github workflow

* Revert prophet version

* Update github workflow

* Update install libomp

* Fix test errors

* Fix test errors

* Add retry to test and coverage

* Revert "Add retry to test and coverage"

This reverts commit ce13097cd5.

* Increase test budget

* Add more data to test_models, try fixing ValueError: Found array with 0 sample(s) (shape=(0, 252)) while a minimum of 1 is required.
2024-07-19 13:40:04 +00:00
Gleb Levitski
3de0dc667e Add ruff sort to pre-commit and sort imports in the library (#1259)
* lint

* bump ver

* bump ver

* fixed circular import

---------

Co-authored-by: Jirka Borovec <6035284+Borda@users.noreply.github.com>
2024-03-12 21:28:57 +00:00
Chi Wang
3e7aac6e8b unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy (#1142)
* simplify the initiation of chat

* version update

* include openai

* completion

* load config list from json

* initiate_chat

* oai config list

* oai config list

* config list

* config_list

* raise_error

* retry_time

* raise condition

* oai config list

* catch file not found

* catch openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* close #1139

* use property

* termination msg

* AIUserProxyAgent

* smaller dev container

* update notebooks

* match

* document code execution and AIUserProxyAgent

* gpt 3.5 config list

* rate limit

* variable visibility

* remove unnecessary import

* quote

* notebook comments

* remove mathchat from init import

* two users

* import location

* expose config

* return str not tuple

* rate limit

* ipython user proxy

* message

* None result

* rate limit

* rate limit

* rate limit

* rate limit

* make auto_reply a common method for all agents

* abs path

* refactor and doc

* set mathchat_termination

* code format

* modified

* remove import

* code quality

* sender -> messages

* system message

* clean agent hierarchy

* dict check

* invalid oai msg

* return

* openml error

* docstr

---------

Co-authored-by: kevin666aa <yrwu000627@gmail.com>
2023-07-25 23:46:11 +00:00
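The commit above revolves around the flaml.autogen agent classes: loading an OpenAI config list from JSON, unifying auto_reply, and starting a conversation via initiate_chat. A minimal sketch of that flow is given below, written against the API as it later stabilized in pyautogen; the file name OAI_CONFIG_LIST, the keyword arguments, and the message are assumptions and may not match this exact revision.

```python
# Hedged sketch of the JSON config list + two-agent chat flow touched by this
# commit; the kwargs follow the later pyautogen surface and are assumptions here.
from flaml import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")  # list of model/key dicts

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                      # fully automated replies
    code_execution_config={"work_dir": "coding"},  # run generated code in ./coding
)
user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png")
```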
Chi Wang
2406e69496 Json config list, agent refactoring and new notebooks (#1133)
* simplify the initiation of chat

* version update

* include openai

* completion

* load config list from json

* initiate_chat

* oai config list

* oai config list

* config list

* config_list

* raise_error

* retry_time

* raise condition

* oai config list

* catch file not found

* catch openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* handle openml error

* close #1139

* use property

* termination msg

* AIUserProxyAgent

* smaller dev container

* update notebooks

* match

* document code execution and AIUserProxyAgent

* gpt 3.5 config list

* rate limit

* variable visibility

* remove unnecessary import

* quote

* notebook comments

* remove mathchat from init import

* two users

* import location

* expose config

* return str not tuple

* rate limit

* ipython user proxy

* message

* None result

* rate limit

* rate limit

* rate limit

* rate limit
2023-07-23 13:23:09 +00:00
Jirka Borovec
a701cd82f8 set black with 120 line length (#975)
* set black with 120 line length

* apply pre-commit

* apply black
2023-04-10 19:50:40 +00:00
Mark Harley
44ddf9e104 Refactor into automl subpackage (#809)
* Refactor into automl subpackage

Moved some of the packages into an automl subpackage to tidy before the
task-based refactor. This is in response to discussions with the group
and a comment on the first task-based PR.

Only changes here are moving subpackages and modules into the new
automl, fixing imports to work with this structure and fixing some
dependencies in setup.py.

* Fix doc building post automl subpackage refactor

* Fix broken links in website post automl subpackage refactor

* Fix broken links in website post automl subpackage refactor

* Remove vw from test deps as this is breaking the build

* Move default back to the top-level

I'd moved this to automl as that's where it's used internally, but had
missed that it is actually part of the public interface, so it makes
sense for it to live where it was.

* Re-add top level modules with deprecation warnings

flaml.data, flaml.ml and flaml.model are re-added to the top level,
being re-exported from flaml.automl for backwards compatibility. A
deprecation warning is added so that we can plan their removal later.

* Fix model.py line-endings

* Pin pytorch-lightning to less than 1.8.0

We're seeing strange lightning-related bugs from pytorch-forecasting
since the release of lightning 1.8.0. Going to try constraining the
version to see if that fixes them.

* Fix the lightning version pin

Was optimistic in setting it to the 1.7.x range, but that isn't
compatible with Python 3.6.

* Remove lightning version pin

* Revert dependency version changes

* Minor change to retrigger the build

* Fix line endings in ml.py and model.py

Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: EgorKraevTransferwise <egor.kraev@transferwise.com>
2022-12-06 15:46:08 -05:00
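The re-export described in the commit above (flaml.data, flaml.ml and flaml.model kept at the top level with a deprecation warning) follows a common shim pattern. Below is a hedged sketch of what such a top-level flaml/ml.py could look like; the warning text and the wildcard re-export are assumptions, not the repository's actual file.

```python
# Hypothetical top-level flaml/ml.py shim: re-export the moved module and warn on import.
import warnings

from flaml.automl.ml import *  # noqa: F401,F403  re-export everything from the new location

warnings.warn(
    "Importing from flaml.ml is deprecated; import from flaml.automl.ml instead.",
    DeprecationWarning,
    stacklevel=2,
)
```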
Chi Wang
92b79221b6 make performance test reproducible (#837)
* make performance test reproducible

* fix test error

* Doc update and disable logging

* document random_state and version

* remove hardcoded budget

* fix test error and dependency; close #777

* iloc
2022-12-06 10:13:39 -08:00
Chi Wang
595af7a04f install editable package in codespace (#826)
* install editable package in codespace

* fix test error in test_forecast

* fix test error in test_space

* openml version

* break tests; pre-commit

* skip on py10+win32

* install mlflow in test

* install mlflow in [test]

* skip test in windows

* import

* handle PermissionError

* skip test in windows

* skip test in windows

* skip test in windows

* skip test in windows

* remove ts_forecast_panel from doc
2022-11-27 14:22:54 -05:00
Xueqing Liu
2314cc5a7e Fix "intermediate_results" TypeError: argument of type 'NoneType' is not iterable (#695)
* fix mlflow bug

* bump version
2022-08-22 13:36:50 -04:00
Chi Wang
816a82a115 make test result more stable (#646) 2022-08-05 10:17:41 -07:00
Chi Wang
e14e909af9 Feature names and importances (#621)
* feature names and importances

* None check

* StackingClassifier has no feature_importances_

* StackingClassifier has no feature_names_in_
2022-07-10 12:25:59 -07:00
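A short usage sketch for the attributes this commit introduces; the dataset and budget are illustrative, and as the bullets note, importances may be unavailable for some estimators (e.g. StackingClassifier).

```python
# Minimal sketch: inspect feature names and importances after a short AutoML run.
from sklearn.datasets import load_iris

from flaml import AutoML

X, y = load_iris(return_X_y=True, as_frame=True)
automl = AutoML()
automl.fit(X, y, task="classification", time_budget=10)

print(automl.feature_names_in_)     # column names seen during fit
print(automl.feature_importances_)  # importances of the best model, if it exposes them
```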
Chi Wang
0642b6e7bb init value type match (#575)
* init value type match

* bump version to 1.0.6

* add a note about flaml version in notebook

* add note about mismatched ITER_HP

* catch SSLError when accessing OpenML data

* catch errors in autovw test

Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
2022-06-09 08:11:15 -07:00
Chi Wang
49e8f7f028 use zeroshot when no budget is given; custom_hp (#563)
* use zeroshot when no budget is given; custom_hp

* update Getting-Started

* protobuf version

* X_val
2022-05-28 17:22:09 -07:00
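The commit above adds two behaviors: falling back to zero-shot default configurations when no budget is given, and a custom_hp argument for overriding parts of the search space. A hedged sketch of custom_hp follows; the estimator, ranges, and dataset are illustrative assumptions following FLAML's documented search-space dict convention.

```python
# Hedged sketch: override xgboost's n_estimators range via custom_hp.
from sklearn.datasets import load_breast_cancer

from flaml import AutoML, tune

X, y = load_breast_cancer(return_X_y=True)
custom_hp = {
    "xgboost": {
        "n_estimators": {
            "domain": tune.lograndint(lower=4, upper=512),
            "init_value": 4,
        },
    },
}
automl = AutoML()
# With no time_budget or max_iter, FLAML can fall back to zero-shot defaults;
# a small budget is set here only so the example finishes quickly.
automl.fit(X, y, task="classification", time_budget=10, custom_hp=custom_hp)
```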
Qingyun Wu
2cdc08a75a update notebook and test 2022-03-30 19:11:10 -07:00
Chi Wang
9128c8811a handle failing trials (#505)
* handle failing trials

* clarify when to return {}

* skip ensemble in accuracy check
2022-03-28 16:57:52 -07:00
Qingyun Wu
6c16e47e42 Bug fix and add documentation for metric_constraints (#498)
* metric constraint documentation

* update link

* update notebook

* fix a bug in adding 'time_total_s' to result

* use the default multiple factor from config file

* update notebook

* format

* improve test

* revise test budget for macos

* bug fix in adding time_total_s

* increase performance check budget

* revise test

* update notebook

* uncomment test

* remove redundancy

* clear output

* remove n_jobs

* remove constraint in notebook

* increase budget

* revise test

* add python version

* use getattr

* improve code robustness

Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
2022-03-26 21:11:45 -04:00
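A brief sketch of the metric_constraints argument this commit documents; each constraint is a (metric_name, operator, threshold) tuple. The particular metric and threshold below are illustrative assumptions.

```python
# Hedged sketch: constrain training loss while optimizing accuracy.
from sklearn.datasets import load_breast_cancer

from flaml import AutoML

X, y = load_breast_cancer(return_X_y=True)
automl = AutoML()
automl.fit(
    X,
    y,
    task="classification",
    time_budget=10,
    metric="accuracy",
    metric_constraints=[("train_loss", "<=", 0.1)],  # (name, "<=" or ">=", threshold)
)
```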
Xueqing Liu
af423463c3 fixing bug for NER (#463)
* fixing bug for NER

* removing global var

* adding class for trial counter

* adding notebook

* adding use_ray dict

* updating documentation for nlp
2022-03-20 22:03:02 -04:00
Chi Wang
38ad31ea25 remove FLAML sample size from config (#418) 2022-01-22 22:59:44 -08:00
Chi Wang
8602def1c4 logging (#371)
* query logged runs

* mlflow log when using ray

* key check for newer version of ray #363

* catch importerror

* log and load AutoML model

* retrain if necessary when ensemble fails
2022-01-02 21:37:19 -08:00
Qingyun Wu
17b17d084f tune api for schedulers (#322)
* revise api and tests

* rename prune_attr

* update finetune notebook

* add scheduler test and notebook

* update tune api for scheduler

* remove scheduler notebook

* Update flaml/tune/tune.py

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* docstr

* fix imports

* clear notebook output

* fix ray import

* Update flaml/tune/tune.py

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* improve docstr

* Update flaml/searcher/blendsearch.py

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* remove redundant import

Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
2021-12-04 21:52:20 -05:00
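A hedged sketch of flaml.tune with the scheduler interface this PR reworks (prune_attr renamed to resource_attr); the toy objective, the resource name, and the bounds are assumptions.

```python
# Hedged sketch: flaml.tune.run with the built-in "flaml" scheduler.
from flaml import tune


def evaluate(config):
    # Toy objective that improves with more of the resource ("epochs").
    return {"loss": (config["x"] - 1) ** 2 / config["epochs"]}


analysis = tune.run(
    evaluate,
    config={"x": tune.uniform(-5, 5)},
    metric="loss",
    mode="min",
    num_samples=20,
    time_budget_s=5,
    scheduler="flaml",       # FLAML's resource-aware scheduling; "asha" requires ray
    resource_attr="epochs",  # renamed from prune_attr in this PR
    min_resource=1,
    max_resource=8,
)
print(analysis.best_config)
```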
Chi Wang
18230ed22f pred_time_limit clarification and logging (#319)
* pred_time_limit clarification

* log prediction time

* handle ChunkedEncodingError in test
2021-12-03 16:02:00 -08:00
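The clarification in the commit above concerns pred_time_limit, FLAML's constraint on average prediction time per example (in seconds). A minimal sketch, with an illustrative dataset and threshold:

```python
# Hedged sketch: cap average prediction latency per row during the search.
from sklearn.datasets import load_breast_cancer

from flaml import AutoML

X, y = load_breast_cancer(return_X_y=True)
automl = AutoML()
automl.fit(
    X,
    y,
    task="classification",
    time_budget=10,
    pred_time_limit=1e-4,  # seconds per instance; configurations exceeding this are disfavored
)
```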
Chi Wang
72caa2172d model_history, ITER_HP, settings in AutoML(), checkpoint bug fix (#283)
if save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO.
retrain if using ray.
update ITER_HP in config after a trial is finished.
change prophet logging level.
example and notebook update.
allow settings to be passed to the AutoML constructor. Use cases such as #192 ("Are you planning to add multi-output-regression capability to FLAML") and #277 ("Is multi-tasking allowed?") can pass the automl settings to the constructor instead of requiring a derived class.
remove model_history.
checkpoint bug fix.

* model_history meaning save_best_model_per_estimator

* ITER_HP

* example update

* prophet logging level

* comment update in forecast notebook

* print format improvement

* allow settings to be passed to AutoML constructor

* checkpoint bug fix

* time limit for autohf regression test

* skip slow test on macos

* cleanup before del
2021-11-18 09:39:45 -08:00
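One of the changes listed above is allowing settings to be passed to the AutoML constructor rather than requiring a derived class. A hedged sketch of that usage follows; the particular settings and dataset are illustrative.

```python
# Hedged sketch: pass settings to the AutoML constructor and reuse them in fit().
from sklearn.datasets import load_breast_cancer

from flaml import AutoML

X, y = load_breast_cancer(return_X_y=True)

settings = {
    "time_budget": 10,
    "metric": "accuracy",
    "task": "classification",
    "log_file_name": "automl.log",
}
automl = AutoML(**settings)  # settings become defaults for subsequent fit() calls
automl.fit(X, y)             # individual arguments can still be overridden per call
```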