* Merged PR 1686010: Bump version to 2.3.5.post2, Distribute source and wheel, Fix license-file, Only log better models
- Fix license-file
- Bump version to 2.3.5.post2
- Distribute source and wheel
- Log better models only
- Add artifact_path to register_automl_pipeline
- Improve logging of _automl_user_configurations
----
This pull request fixes the project’s packaging configuration by updating the license metadata for the FLAML OSS 2.3.5 release.
The changes in `/pyproject.toml` update the project’s license and readme metadata by replacing deprecated keys with the new structured fields (a minimal verification sketch follows below).
- `/pyproject.toml`: Replaced `license_file` with `license = { text = "MIT" }`.
- `/pyproject.toml`: Replaced `description-file` with `readme = "README.md"`.
Related work items: #4252053
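A minimal verification sketch, assuming Python 3.11+ (for `tomllib`) and the updated `pyproject.toml` at the repository root; it only checks the two fields named above.

```python
# Sketch: confirm the new structured metadata fields load as expected.
# Assumes Python 3.11+ (tomllib) and pyproject.toml in the repo root.
import tomllib

with open("pyproject.toml", "rb") as f:
    project = tomllib.load(f)["project"]

# The deprecated license_file / description-file keys are replaced by these:
assert project["license"] == {"text": "MIT"}
assert project["readme"] == "README.md"
```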
* Merged PR 1688479: Handle feature_importances_ is None, Catch RuntimeError and wait for spark cluster to recover
- Add warning message when feature_importances_ is None (#3982120)
- Catch RuntimeError and wait for spark cluster to recover (#3982133)
----
Bug fix.
This pull request prevents an AttributeError in the feature importance plotting function by adding a check for a `None` value with an informative warning message.
- `flaml/fabric/visualization.py`: Checks if `result.feature_importances_` is `None`, logs a warning with possible reasons, and returns early (a sketch of the guard follows below).
- `flaml/fabric/visualization.py`: Imports `logger` from `flaml.automl.logger` to support the warning message.
Related work items: #3982120, #3982133
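A minimal sketch of the guard described above; the function name and warning text are illustrative placeholders, not the exact code in `flaml/fabric/visualization.py`.

```python
# Sketch of the early-return guard; the function name and message are
# illustrative placeholders for the actual plotting helper.
from flaml.automl.logger import logger


def plot_feature_importance(result):
    if result.feature_importances_ is None:
        # Warn instead of raising AttributeError when the estimator exposes
        # no feature importances (e.g. the final model was not retrained).
        logger.warning(
            "feature_importances_ is None; the estimator may not support "
            "feature importance, or the final model was not retrained."
        )
        return
    ...  # plotting logic continues here for the non-None case
```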
* Removed deprecated metadata section
* Fix log_params; log_artifact doesn't support run_id in mlflow 2.6.0
* Remove autogen
* Remove autogen
* Remove autogen
* Merged PR 1776547: Fix flaky test test_automl
Don't throw error when time budget is not enough
----
#### PR Classification
Bug fix addressing a failing test in the AutoML notebook example.
#### PR Summary
This PR fixes a flaky test by adding a conditional check in the AutoML test that prints a message and exits early if no best estimator is set, thereby preventing unpredictable test failures.
- `test/automl/test_notebook_example.py`: Introduced a check to print "Training budget is not sufficient" and return if `automl.best_estimator` is not found (sketched below).
Related work items: #4573514
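A minimal sketch of the added check; `check_best_model` is an illustrative wrapper rather than the actual test function, and only the message and early return come from the PR description above.

```python
def check_best_model(automl):
    """Sketch of the early-exit check added to the notebook-example test.

    `automl` is the flaml.AutoML instance after fit() ran with a possibly
    insufficient time budget.
    """
    if not automl.best_estimator:
        print("Training budget is not sufficient")
        return
    # Assertions on automl.best_estimator / automl.predict(...) would follow.
```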
* Merged PR 1777952: Fix unrecognized or malformed field 'license-file' when uploading wheel to feed
Try to fix InvalidDistribution: Invalid distribution metadata: unrecognized or malformed field 'license-file'
----
Bug fix addressing package metadata configuration.
This pull request fixes the "unrecognized or malformed field 'license-file'" error raised during wheel uploads by updating the setup configuration.
- In `setup.py`, added `license="MIT"` and `license_files=["LICENSE"]` to provide proper license metadata (sketched below).
Related work items: #4560034
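A minimal sketch of the `setup.py` change; every argument other than the two license fields named above is elided.

```python
# Sketch: only the license metadata added by this PR is shown.
from setuptools import setup

setup(
    # ... name, version, packages and other existing arguments elided ...
    license="MIT",
    license_files=["LICENSE"],
)
```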
* Cherry-pick Merged PR 1879296: Add support to python 3.12 and spark 4.0
* Cherry-pick Merged PR 1890869: Improve time_budget estimation for mlflow logging
* Cherry-pick Merged PR 1879296: Add support to python 3.12 and spark 4.0
* Disable openai workflow
* Add python 3.12 to test envs
* Manually trigger openai
* Support markdown files with underscore-prefixed file names
* Improve save dependencies
* SynapseML is not installed
* Fix syntax error: Module !flaml/autogen was never imported
* macos 3.12 also hangs
* fix syntax error
* Update python version in actions
* Install setuptools for using pkg_resources
* Fix test_automl_performance in Github actions
* Fix test_nested_run
* Update gitignore
* Bump version to 2.4.0
* Update readme
* Pre-download california housing data
* Use pre-downloaded california housing data
* Pin lightning<=2.5.6
* Fix typo in find and replace
* Fix estimators having no attribute `__sklearn_tags__`
* Pin torch to 2.2.2 in tests
* Fix conflict
* Update pytorch-forecasting
* Update pytorch-forecasting
* Update pytorch-forecasting
* Use numpy<2 for testing
* Update scikit-learn
* Run Build and UT every other day
* Pin pip<24.1
* Pin pip<24.1 in pipeline
* Loosen pip, install pytorch_forecasting only in py311
* Add support to new versions of nlp dependencies
* Fix formats
* Remove redefinition
* Update mlflow versions
* Fix mlflow version syntax
* Update gitignore
* Clean up cache to free space
* Remove clean up action cache
* Fix blendsearch
* Update test workflow
* Update setup.py
* Fix catboost version
* Update workflow
* Prepare for python 3.14
* Support no catboost
* Fix tests
* Fix python_requires
* Update test workflow
* Fix vw tests
* Remove python 3.9
* Fix nlp tests
* Fix prophet
* Print pip freeze for better debugging
* Fix Optuna search does not support parameters of type Float with samplers of type Quantized
* Save dependencies for later inspection
* Fix coverage.xml not existing
* Fix github action permission
* Handle python 3.13
* Address openml is not installed
* Check dependencies before run tests
* Update dependencies
* Fix syntax error
* Use bash
* Update dependencies
* Fix git error
* Loosen mlflow constraints
* Add rerun, use mlflow-skinny
* Fix git error
* Remove ray tests
* Update xgboost versions
* Fix automl pickle error
* Don't test python 3.10 on macos as it's stuck
* Rebase before push
* Reduce number of branches
* Sync Fabric till 2cd1c3da
* Remove synapseml from tag names
* Fix 'NoneType' object has no attribute 'DataFrame'
* Deprecate python 3.8 support
* Fix 'NoneType' object has no attribute 'DataFrame'
* Still use python 3.8 for pydoc
* Don't run tests in parallel
* Remove autofe and lowcode
* Refactor into automl subpackage
Moved some of the packages into an automl subpackage to tidy up before the
task-based refactor. This is in response to discussions with the group
and a comment on the first task-based PR.
The only changes here are moving subpackages and modules into the new
automl subpackage, fixing imports to work with this structure, and fixing
some dependencies in setup.py.
* Fix doc building post automl subpackage refactor
* Fix broken links in website post automl subpackage refactor
* Fix broken links in website post automl subpackage refactor
* Remove vw from test deps as this is breaking the build
* Move default back to the top-level
I'd moved this to automl as that's where it's used internally, but had
missed that this is actually part of the public interface, so it makes
sense for it to live where it was.
* Re-add top level modules with deprecation warnings
flaml.data, flaml.ml and flaml.model are re-added to the top level,
being re-exported from flaml.automl for backwards compatibility. Adding
a deprecation warning so that we can have a planned removal later (a sketch
of the shim pattern follows below).
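A minimal sketch of the shim pattern, using a hypothetical top-level `flaml/data.py`; the actual warning text and re-export mechanism in the repo may differ.

```python
# Hypothetical flaml/data.py shim: re-export from flaml.automl for
# backwards compatibility and warn about the planned removal.
import warnings

from flaml.automl.data import *  # noqa: F401,F403

warnings.warn(
    "Importing from flaml.data is deprecated; use flaml.automl.data instead.",
    DeprecationWarning,
    stacklevel=2,
)
```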
* Fix model.py line-endings
* Pin pytorch-lightning to less than 1.8.0
We're seeing strange lightning-related bugs from pytorch-forecasting
since the release of lightning 1.8.0. Going to try constraining this to
see if we have a fix.
* Fix the lightning version pin
Was optimistic with setting it in the 1.7.x range, but that isn't
compatible with python 3.6
* Remove lightning version pin
* Revert dependency version changes
* Minor change to retrigger the build
* Fix line endings in ml.py and model.py
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: EgorKraevTransferwise <egor.kraev@transferwise.com>
* install editable package in codespace
* fix test error in test_forecast
* fix test error in test_space
* openml version
* break tests; pre-commit
* skip on py10+win32
* install mlflow in test
* install mlflow in [test]
* skip test in windows
* import
* handle PermissionError
* skip test in windows
* skip test in windows
* skip test in windows
* skip test in windows
* remove ts_forecast_panel from doc
* skip in-search-space check for small max iter
* resolve Pickle Transformer #730
* resolve default config unrecognized #784
* Change definition of init_config
* copy points_to_evaluate
* make test pass
* check learner selector
* rm classification head in nlp
* rm classification head in nlp
* rm classification head in nlp
* adding test cases for switch classification head
* adding test cases for switch classification head
* Update test/nlp/test_autohf_classificationhead.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* adding test cases for switch classification head
* run each test separately
* skip classification head test on windows
* disabling wandb reporting
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* fix test nlp custom metric
* Update website/docs/Examples/AutoML-NLP.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Examples/AutoML-NLP.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* fix test nlp custom metric
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* FLAML_sample_size
* clean up
* starting_points as a list
* catch AssertionError
* per estimator sample size
* import
* per estimator min_sample_size
* Update flaml/automl.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update test/automl/test_warmstart.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* add warnings
* adding more tests
* fix a bug in validating starting points
* improve test
* revise test
* revise test
* documentation about custom_hp
* doc and efficiency
* update test
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* refactoring TransformersEstimator to support default and custom_hp
* handling starting_points not in search space
* addressing starting point more than max_iter
* fixing upper < lower bug
* fix checkpoint naming + trial id for non-ray mode, fix the bug in running test mode, delete all the checkpoints in non-ray mode
* finished testing for checkpoint naming, delete checkpoint, ray, max iter = 1
* adding predict_proba, address PR 293's comments
Closes #293 and #291.
if save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO.
retrain if using ray.
update ITER_HP in config after a trial is finished.
change prophet logging level.
example and notebook update.
allow settings to be passed to the AutoML constructor ("Are you planning to add multi-output-regression capability to FLAML?" #192, "Is multi-tasking allowed?" #277): the automl settings can be passed to the constructor instead of requiring a derived class (a usage sketch follows below).
remove model_history.
checkpoint bug fix.
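A minimal usage sketch of passing settings to the AutoML constructor; the dataset and budget are arbitrary example values, not taken from the change itself.

```python
# Sketch: settings go straight to the constructor, no derived class needed.
from sklearn.datasets import load_iris
from flaml import AutoML

X_train, y_train = load_iris(return_X_y=True)
settings = {"time_budget": 5, "metric": "accuracy", "task": "classification"}
automl = AutoML(**settings)
automl.fit(X_train, y_train)
print(automl.best_estimator)
```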
* model_history meaning save_best_model_per_estimator
* ITER_HP
* example update
* prophet logging level
* comment update in forecast notebook
* print format improvement
* allow settings to be passed to AutoML constructor
* checkpoint bug fix
* time limit for autohf regression test
* skip slow test on macos
* cleanup before del