General AI ML #2
Conversation
For Unit 2:
For Unit 4:
Hi @huiwen99, thanks for the feedback. I've made the updates and commented them next to your original comments above.
Thanks @WaseemSheriff! Mostly looks good for this unit now.
Some replies to @huiwen99's questions here:
Good catch, obvious typo, fixed.
I strongly disagree with point 2 for a couple of reasons:
So if anything, if we do want to prescribe a CPU architecture, we should consider mandating that they be built for
Similar to 1, straightforward fix, changed.
These notebooks are actually running on the GCP JupyterLab instance, as the screenshots in the notebook indicate. But there are going to be additional notebooks/guides on how to get things running on GCP, which should be in the
My bad, yeah let's stick to the AMD version. Thank you so much!
In that case, I think that should be everything for the general unit (pending
* initial commit
* initial commit
* General AI ML (#2)
* initial commit
* add Unit 1 files
* update Unit 1 files
* add Unit 2 files
* updates from Unit 1 feedback
* updates from Unit 1 feedback
* feat: rename fine-tuning notebook
* Unit 2 updates
* add Unit 4 files
* add Unit 4 files
* feat: add line about cloud docker
* feat: gcp boilerplate docker
* feat: update imports
* updates from Unit 2 feedback
* updates from Unit 2 feedback
* updates from Unit 4 feedback
* add Unit 4 files
* fix: 4.4.1 typos
* update file numbering system
* fix: typos

---------

Co-authored-by: waseem-ga <[email protected]>

---------

Co-authored-by: waseem-ga <[email protected]>
Co-authored-by: WaseemSheriff <[email protected]>
Feedback received from @huiwen99:
re 1: it might just be practical to do so anyway, given the environment complications that can arise when setting up both tensorflow and torch in the same environment (if participants don't properly use an environment manager)
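For context, a minimal sketch of the kind of separation an environment manager gives (the environment names, package choices, and POSIX-style paths below are purely illustrative, not taken from the course material):

```python
# Illustrative only: one virtual environment per framework, so the
# TensorFlow and PyTorch dependency stacks never have to coexist.
# Assumes POSIX-style paths (env/bin/pip); adjust for Windows.
import subprocess
import sys

for env_name, package in [("tf-env", "tensorflow"), ("torch-env", "torch")]:
    # Create an isolated virtual environment for this framework
    subprocess.run([sys.executable, "-m", "venv", env_name], check=True)
    # Install only that framework into its own environment
    subprocess.run([f"{env_name}/bin/pip", "install", package], check=True)
```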
re 2: I think this makes sense too. so the necessary changes would be:
* `1.08 Intro to Transformer Architecture` can become `1.07`
* `1.08 Common libraries` was mislabelled (it was originally `1.09`) but this is fine since it is now `1.08`
* `1.07 How to train a model` can become `1.09`
* `1.09 Resources` can become `1.10`
re 3: not sure what you mean by this @huiwen99. Do you mean the torch.optim module of the torch library, or the torch-optimizer package?
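For reference, a quick sketch of the two interpretations (the specific optimizers chosen below are only illustrative; I'm not assuming either one is what the material intends):

```python
# Sketch of the two things "torch optimizer" could refer to.
import torch

model = torch.nn.Linear(4, 1)

# (a) torch.optim: the optimizers that ship with PyTorch itself
sgd = torch.optim.SGD(model.parameters(), lr=0.01)

# (b) torch-optimizer: a separate third-party package of extra optimizers,
#     installed with `pip install torch-optimizer`
import torch_optimizer
lamb = torch_optimizer.Lamb(model.parameters(), lr=1e-3)
```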
re 4: agree, we'll be giving participants a deeper dive into Docker in later units anyhow