Rewrite backend to be more modular, maintainable #70
Conversation
The docs for the backend look good (i.e., I read the docs and the code and understand what's going on). However, here are some things I noticed that should be a quick fix:
columns = property(_get_columns, _set_columns)
index = property(_get_index, _set_index)

# END Index and columns objects
Redundant line; it collides with the same line after `compute_index`.
# END Index and columns objects

def compute_index(self, data_object):
Do we have to be Python 2 compatible? If not, I would like to add type annotations:

def compute_index(self, data_object: BlockPartition) -> pandas.Index:
    ...
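If Python 2 compatibility is still required, the same information could be carried by a PEP 484 type comment instead of annotation syntax. A minimal sketch, assuming `compute_index` lives on some partition-collection class; the wrapper class name here is made up, and `BlockPartition` is assumed to be importable in the real module:

```python
import pandas  # noqa: F401  (referenced only in the type comment)


class BaseBlockPartitions(object):  # hypothetical container class for illustration
    def compute_index(self, data_object):
        # type: (BlockPartition) -> pandas.Index
        """Same signature as above, expressed as a Python 2-compatible type comment."""
        raise NotImplementedError("Must be implemented in child class")
```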
""" | ||
raise NotImplementedError("Must be implemented in child class") | ||
|
||
def to_pandas(self): |
Note:
- Contrary to what I said in the meeting offline, I'm fine with where `to_pandas` is right now, because converting any sort of block to any representation is important. `RemotePartition::to_pandas` is a good interface, so any future implementation should implement a `to_pandas`.
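To make the interface point concrete, here is a rough sketch of what "any future implementation should implement a `to_pandas`" could look like. `RemotePartition` and `RayRemotePartition` are names from this PR, but the constructor, attribute name, and method bodies below are my assumptions, not the actual implementation:

```python
import pandas

import ray  # only needed by the Ray-backed example


class RemotePartition(object):
    """Abstract wrapper around one block of data."""

    def to_pandas(self):
        """Materialize this block as a pandas.DataFrame."""
        raise NotImplementedError("Must be implemented in child class")


class RayRemotePartition(RemotePartition):
    """Sketch of a Ray-backed partition holding an object ID to a pandas block."""

    def __init__(self, object_id):
        self.oid = object_id  # assumed attribute name

    def to_pandas(self):
        # ray.get pulls the remote pandas block back to the caller.
        dataframe = ray.get(self.oid)
        assert isinstance(dataframe, pandas.DataFrame)
        return dataframe


# Usage sketch (requires a running Ray instance, i.e. ray.init()):
#   partition = RayRemotePartition(ray.put(pandas.DataFrame({"a": [1, 2]})))
#   partition.to_pandas()
```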
joined_axis = self._join_index_objects(axis, [other.columns if axis == 0
                                              else other.index for other in others], join, sort=sort)

to_append = [other.reindex(axis ^ 1, joined_axis).data for other in others]
request documentation
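For context while that documentation gets written, here is a small standalone illustration, in plain pandas, of the join-then-reindex pattern this snippet uses: join the labels of the axis you are not appending along, then reindex every other object to those joined labels. The helper name `join_index_objects` is mine, loosely mirroring `_join_index_objects` above; the `axis ^ 1` in the snippet simply flips 0 and 1 to select the opposite axis.

```python
import pandas


def join_index_objects(indexes, how="outer", sort=False):
    """Join a list of pandas.Index objects into a single Index (illustrative)."""
    joined = indexes[0]
    for index in indexes[1:]:
        joined = joined.join(index, how=how)
    return joined.sort_values() if sort else joined


left = pandas.DataFrame({"a": [1], "b": [2]})
right = pandas.DataFrame({"b": [3], "c": [4]})

# Appending rows (axis=0): first join the *column* labels of every frame
# (the axis we are NOT appending along), then reindex each frame's columns
# to those joined labels so the blocks line up before concatenation.
joined_columns = join_index_objects([left.columns, right.columns], how="outer")
blocks = [frame.reindex(columns=joined_columns) for frame in (left, right)]
print(pandas.concat(blocks, axis=0))
```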
Commit history for this PR branch:

- Adding some partitioning updates
- Updating remote
- Continuing backend rewrite progress
- Adding idxmax/min
- Adding head/tail/repr for data_manager
- Fixing transpose
- Updating remote with bugfixes for transpose/repr
- Fixing a number of operations
- Fix quantile
- Adding more functionality, `__getitem__`
- Fixing more tests
- Fixing drop, passing tests
- Add insert to new structure
- Cleaning up unneeded imports
- Updating remote
- Fix minor bug
- Updating remote
- Add sort_index
- Minor refactor of code
- Cleaning some up
- Continuing logic migration
- Add some docs
- Adding more docs
- Add more method level documentation
- Adding documentation and cleaning up
- Restructuring partitioning files for simplicity
- Adding more docs, cleaning up docs
- Fix performance bug, more cleanup
- More cleanup/renaming
- Adding factory skeleton
- Adding from_pandas code path and update constructor
- Removing debugging code
- Cleaning up dead code
- Added type checking and changed how variables were read in from kwargs (#1)
- Removing IndexMetadata and code_gen
- Adding preliminary apply method
- Updated sample and eval to the new backend (#2)
  * Added type checking and changed how variables were read in from kwargs
  * Updated sample to new architecture
  * Made test_sample more rigorous
  * Removed 'default=' from kwargs.get's
  * Updated eval to the new backend
  * Added two more tests for eval
- Finalizing apply and start agg
- Fixing some broken stuff with apply, update remote
- Starting dictionary apply and fillna
- Fixed dictionary apply and fillna
- Moves Inter DataFrame Operations Logic to Data Manager (#3)
  * Moving multi dataframe operation logic to data_manager
  * Remove Unused Functions from dataframe.py
  * removed unnecessary isScalar arg
  * minor code cleanup
  * changed _operator_handler name
  * removing hasattr from data_manager
  * cleaning up dataframe.py code for add function
  * changing name to _validate_other
  * cleaned up kwargs parsing in data_manager function
  * updated all inter df functions
  * commenting out old helper functions in dataframe.py
  * cleaned up unused code
  * fixed type error for functions using map_across_axis
- Updated info and memory_usage to new backend (#4)
  * Added type checking and changed how variables were read in from kwargs
  * Updated sample to new architecture
  * Made test_sample more rigorous
  * Removed 'default=' from kwargs.get's
  * Updated eval to the new backend
  * Added two more tests for eval
  * Updated memory_usage to new backend
  * Updated info and memory_usage to the new backend
  * Updated info and memory_usage to be standalone tests and updated the tests
  * Updated info to do only one pass
  * Updated info to do everything in one run with DataFrame
  * Update info to do everything in one run with Series
  * Updated info to do everything in one run with DataFrame
  * Updated to get everything working and moved appropriate parts to DataManager
- Adding first where implementation
- Adding sort_values and update implementations
- Cleaning up dead code
- Adding manual_shuffle abstraction
- Starting merge
- Add merge
- Cleaning up
- Add dtype (#6)
  * Added type checking and changed how variables were read in from kwargs
  * Updated sample to new architecture
  * Made test_sample more rigorous
  * Removed 'default=' from kwargs.get's
  * Updated eval to the new backend
  * Added two more tests for eval
  * Updated memory_usage to new backend
  * Updated info and memory_usage to the new backend
  * Updated info and memory_usage to be standalone tests and updated the tests
  * Updated info to do only one pass
  * Updated info to do everything in one run with DataFrame
  * Update info to do everything in one run with Series
  * Updated info to do everything in one run with DataFrame
  * Updated to get everything working and moved appropriate parts to DataManager
  * Removed extraneous print statement
  * Moved dtypes stuff to data manager
  * Fixed calculating dtypes to only doing a full_reduce instead of map_full_axis
  * Updated astype to new backend
  * Updated astype to new backend
  * Updated ftypes to new backend
  * Added dtypes argument to map_partitions
  * Fixing dtypes
  * Cleaning up dtype and merge issues
- Fix isin bug
- Cleaning up
- Cleaning up more unused code
- Updated iterables and to_datetime to new backend and improved astype runtime (#7)
  * Added type checking and changed how variables were read in from kwargs
  * Updated sample to new architecture
  * Made test_sample more rigorous
  * Removed 'default=' from kwargs.get's
  * Updated eval to the new backend
  * Added two more tests for eval
  * Updated memory_usage to new backend
  * Updated info and memory_usage to the new backend
  * Updated info and memory_usage to be standalone tests and updated the tests
  * Updated info to do only one pass
  * Updated info to do everything in one run with DataFrame
  * Update info to do everything in one run with Series
  * Updated info to do everything in one run with DataFrame
  * Updated to get everything working and moved appropriate parts to DataManager
  * Removed extraneous print statement
  * Moved dtypes stuff to data manager
  * Fixed calculating dtypes to only doing a full_reduce instead of map_full_axis
  * Updated astype to new backend
  * Updated astype to new backend
  * Updated ftypes to new backend
  * Added dtypes argument to map_partitions
  * Updated astype and added dtypes option to _from_old_block_partitions in RayPandasDataManager
  * Undid unnecessary change
  * Updated iterables to new backend
  * Updated to_datetime to new backend
  * Reverted some changes for PR
  * Replaced pd with pandas
  * Made additional changes mentioned in (#7)
- Cleaning up
- Cleaning up imports
- Fix minor bug from getting kwargs
- Concat now working with new architecture (#9)
  * concat now working with new architecture
  * fixing functionality for pandas Series
  * updated append_list_of_data_managers function for concat
  * minor stylistic fix
  * remove unused append_data_manager function
  * fixed join
  * removed axis arg from join function
- read_csv changes and improvements in performance (#10)
  * Test changes to io
  * Update io changes
  * Fix performance bug
  * Debugging performance
  * Debugging performance on large IO
  * Making some performance tuning changes
  * Cleaning up and adding performance improvements
  * Cleaning up
  * Addressing comments
  * Addressing comments
- Fix bug
- Formatting
- fix fillna bug
- updated rdiv, rpow, rsub methods (#12)
  * updated rdiv, rpow, rsub methods
  * spelled dataframe wrong
- Fixed eval and astype (#11)
  * Updated to_datetime docstring
  * Updated astype tests
  * Commented out loc and iloc tests
  * Updated eval
  * removed empty space and uncommented test_loc and test_iloc
- Passes test_mixed_dtype_dataframe and test_nan_dataframe (#15)
  * Fixed describe and quantiles and cleaned up code
  * Updated numeric functions and handles empty dataframes
  * Fixed dtypes and ftypes
  * Imported is_numeric_dtype from pandas
  * Cleaned up print statements in test_dataframe.py
- Cleaning up and enabling tests
- Fix `__repr__`
- Removing dead code
- Fix where bug
- Fix append error checking
- Fix read_csv args bug
- Fix read_parquet
- Groupby implementation
- Adding groupby final fix
- Adding docs
- Fix for info (#16)
  * Quick fix for info
  * Removed extraneous print statement
  * Restructured to use count and memory_usage instead
- Minor optimization change
- get_dummies implementation (#19)
  * initial code for get_dummies
  * Starting help on get_dummies
  * Fix get_dummies
  * Removing dead code
  * bug fix for get_dummies
- Rewrite loc (#20)
  * Rewrite the rewrite
  * Finish implement loc/iloc
  * Remove debug lines, fix typo
  * Removing unused imports
  * Removing dead code
  * Changing naming of clone
  * Formatting and removing dead code
  * Moving imports for matching pandas
- …nd other numeric functions (#18)
  * Cleaned up code and added documentation
  * Worked on documentation
  * Added periods
  * Changed all to follow the documentation
  * Added more method level documentation and fixed quantile with list input
  * Updated quantiles
  * Fixed dataframe to manager and reduced descriptions to one line in data_manager.py documentation
  * Fixed quantiles for sure this time
  * Changed numeric functions to be cleaner and work properly
  * Resolved comments
Feel free to merge anytime. Awesome work!!! It comes together nicely.
This will touch essentially all of the code. We are currently heavily integrated with the internals of Ray: there are a number of places where we write and call custom `ray.remote` functions directly from `dataframe.py`. This PR reworks that approach into a more modular one that will let us use the backend from a number of places. The different APIs will no longer have to operate directly on a DataFrame; they can call the new `DataManager` instead.

Additionally, there is a new `BlockPartitions` abstract class that directly interacts with the data. It does not know about the `Index` or `Columns`, and it operates as statelessly as possible. There is also a new `RemotePartition` object that stores the data and knows how to interact with it.

I have created `RayDataManager`, `RayBlockPartitions`, and `RayRemotePartition` classes for running on Ray. Ray is currently the only supported backend, but this structure makes it much easier to integrate other backends. The modularity also lets new backends provide custom implementations for certain methods to speed up operations on that backend.

It's not done yet, but it would be great to get some feedback on the overall approach here. I will add a checklist of things to be done to this post shortly.
- Add developer docs (happening in a subsequent PR)
- Clean up dead code
- Passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
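To make the layering described above concrete, here is a rough, non-authoritative sketch of how the pieces relate. The class names `DataManager`, `BlockPartitions`, and `RemotePartition` (and their Ray counterparts) come from the description; every attribute and method shown is an assumption for illustration, not the PR's actual API:

```python
class RemotePartition(object):
    """Stores a single block of data and knows how to interact with it."""

    def to_pandas(self):
        raise NotImplementedError("Must be implemented in child class")


class BlockPartitions(object):
    """Directly interacts with the data as a grid of RemotePartition objects.

    Knows nothing about the Index or Columns and stays as stateless as possible.
    """

    def __init__(self, partitions):
        # e.g. a 2D list of RemotePartition objects
        self.partitions = partitions


class DataManager(object):
    """Pairs a BlockPartitions object with index/columns metadata.

    API layers (e.g. dataframe.py) call the DataManager instead of writing and
    calling custom ray.remote functions themselves; a Ray backend plugs in by
    providing RayDataManager / RayBlockPartitions / RayRemotePartition
    subclasses of these three classes.
    """

    def __init__(self, block_partitions, index, columns):
        self.data = block_partitions
        self.index = index
        self.columns = columns
```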