Commit 67316f9e by PLN (Algolia)

Content: 01 iteration

parent 11fea5d6
##### - [DeepDream](https://en.wikipedia.org/wiki/DeepDream)
- [Pareidolia](https://en.wikipedia.org/wiki/Pareidolia)
![bg left 100%](../tp/img/intech_dream.png)
---
- #### Senior ML Engineer @Algolia
![bg right](./img/01-me2.jpg)
---
## And you?
<!-- Name, favorite music genre, favorite dish -->
---
## What is this course?
---
How do I know what I don't know?
<!-- Your strength as a rationalist is to be more surprised by fiction than reality. If your mental model can explain _anything_, it's simply useless. -->
<!-- Train/test split: I train on one, I test on the other -->
<!-- Then I fine-tune, but won't I end up learning the Test set? -->
<!-- How do we do a train/test split and still keep a final **validation** metric? -->
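The speaker notes above sketch the idea; here is a minimal illustration, assuming scikit-learn and invented toy data (neither is prescribed by the course):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in data (hypothetical): 100 samples, 5 features, binary labels.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

# Hold out a validation set first; never touch it while fine-tuning...
X_rest, X_val, y_rest, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# ...then split the rest: train to fit on, test to tune against.
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

# Fit on train, iterate against test, and score on validation only once, at the very end.
```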
---
## Underfitting
<!-- Image by User:Chabacano: https://commons.wikimedia.org/wiki/File:Overfitting.svg -->
---
# Failures of ML
<!-- Publish or Perish, ring a bell? -->
<!-- Replication Crisis ? -->
---
Generative models
[ThisXDoesNotExist](https://thisxdoesnotexist.com/)
---
# FaceApp
![](https://cdn-images-1.medium.com/max/1024/1*XBEpvGfjv_xo7ebBYNVDNA.png)
<!-- Credit: https://laptrinhx.com/faceapp-or-how-i-learned-to-stop-worrying-and-love-the-machines-2993489094/ -->
---
# GANs
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
![bg 80%](https://www.pianshen.com/images/242/e94f2e3824d178f110112d865fbda65a.png)
---
---
ChatGPT:
<!--
Linda is 31 years old, single, outspoken, and very bright.
She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable?
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.
-->
[Linda Problem](https://twitter.com/dggoldst/status/1598317411698089984)
---
<!-- But maybe assumption? Language ambiguous! What's implicit in this text? -->
[Assumptions?](https://twitter.com/dggoldst/status/1598737445780164635)
<!-- Humans learn mistakes because of BIASES which are MOSTLY USEFUL
Machines which learn alike will by default exhibit the same tendencies!
-->
---
<!-- Fast and slow: Baseball bat & ball -->
[Computing, Fast & Slow](https://twitter.com/stefanmherzog/status/1598397009161060359)
---
<!-- Which is more likely? A 30/50 event or a 3/5 event? -->
[Math is hard tho](https://twitter.com/batwood011/status/1598389323979800576/photo/1)
---
<!-- Would you rather have a 100% sure million, or 1 BILLION with probability 0.99? -->
[Risk preferences](https://twitter.com/mdahardy/status/1598139462122622976/photo/1)
---
<!-- What's more likely? Canceled Flight or Canceled flight due to upcoming snowstorm? -->
[Conjunction fallacy](https://twitter.com/mdahardy/status/1598139470641262592/photo/1)
---
<!-- What's something most people know that you wouldn't know? -->
[Meta-cognition](https://twitter.com/mdahardy/status/1598139475451817984)
---
<!-- Are windows placed near plants, or plants near windows? -->
[Physical Reasoning: Windows](https://twitter.com/mdahardy/status/1598139488873897984/photo/1)
- Yet [ordering lol](https://twitter.com/mdahardy/status/1598139493856743424)
---
<!-- Could you fit 100 big macs in an average car? -->
[Physical Reasoning: Big Macs](https://twitter.com/mdahardy/status/1598139498357231616/photo/1)
---
<!-- Who would win in a fight: Napoleon's army, or a hippopotamus? -->
[Physical Reasoning: Napoleon](https://twitter.com/mdahardy/status/1598139507559518208)
---
- See more: [AI Alignment Forum | Cognitive Biases in Large Language Models](https://www.alignmentforum.org/posts/fFF3G4W8FbXigS4gr/cognitive-biases-in-large-language-models)
---
TAY
![bg right fit vertical invert](https://mmp483.p3cdn1.secureserver.net/wp-content/uploads/2016/03/Tay-AI-Feature-Image-03232016.jpg?time=1670290595)
-> Interface matters!
---
---
### Short-term goals vs long-term goals
<!-- What did these robots fail to learn? -->
<!-- What is Netflix optimizing for? -->
<!-- What is the risk of a 'devilishly good' add-to-cart variant found by automated A/B testing? -->
---
<!--
_color: black
-->
### Dangerous goals:
### The story of the _Paperclip maximizer_
- [Try it yourself](https://www.decisionproblem.com/paperclips/index2.html) with a Clicker game :P
<!--
First described by Nick Bostrom (2003). -->
![bg crop blur](./img/00-paperclips.png)
---
### Dangerous goals: Paperclip maximizer?
> ##### _Rare photograph of a human after the deployment of the Paperclip Maximizer_
<!-- Utilitronium: if your reward function is basic, the universe becomes grey goo -->
<!-- Also, Nozick's experience machine! -->
![bg right crop](https://www.sunnyskyz.com/uploads/2020/05/tqz1n-clippy-office-prank-2.jpg)
<!-- Image credit: https://www.sunnyskyz.com/blog/3056/Employee-Pranks-Entire-Office-With-An-Army-Of-Clippy-Posts -->
---
# What is ML?
---
A thermostat?
![bg fit right 90%](https://www.simulace.info/images/NegativeF.jpg)
<!-- Image credit: University of Prague - Simulace.info -->
---
AND a bag of ifs?
![bg right fit](https://upload.wikimedia.org/wikipedia/commons/e/eb/Decision_Tree.jpg)
<!-- Image credit: User:Gilgoldm
A tree showing survival of passengers on the Titanic ("sibsp" is the number of spouses or siblings aboard). The figures under the leaves show the probability of survival and the percentage of observations in the leaf. Summarizing: Your chances of survival were good if you were (i) a female or (ii) a male at most 9.5 years old with strictly fewer than 3 siblings.
-->
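The "bag of ifs" can be made literal. A minimal sketch, assuming scikit-learn; the iris dataset stands in for the Titanic data of the figure:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree, then print the nested if/else rules it learned.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))
```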
---
Amazon Mechanical Turk
<!-- The Turk also had the ability to converse with spectators using a letter board. The operator, whose identity during the period when Kempelen presented the machine at Schönbrunn Palace is unknown, was able to do this in English, French, and German. Carl Friedrich Hindenburg, a university mathematician, kept a record of the conversations during the Turk's time in Leipzig and published it in 1789 as Über den Schachspieler des Herrn von Kempelen und dessen Nachbildung (or On the Chessplayer of Mr. von Kempelen And Its Replica). Topics of questions put to and answered by the Turk included its age, marital status, and its secret workings. -->
<!-- -> What about DeepBlue? About AlphaZero? -->
![bg right fit](https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Racknitz_-_The_Turk_3.jpg/1920px-Racknitz_-_The_Turk_3.jpg)
<!-- Image credit: Joseph Racknitz - Humboldt University Library -->
---
[Eliza](http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm)? Turing test?
<!-- Is it ML if I mistake it for a human? Is ML == "a learned human capability"? -->
---
And [Akinator](https://fr.akinator.com/game)?
---
What are the limits of what ML can do?
> If a typical person can do a mental task with
> **less than one second of thought**,
> we can probably automate it using AI
> either now or in the near future.
~ Andrew Ng, [What AI can and can't do](https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now)
---
$$ X \Rightarrow Y $$
<!-- What is a feature?
Feature Engineering: a whole art in itself
-->
<br />
<br />
<br />
See [_A broad introduction to Feature Engineering_](https://medium.com/geekculture/a-broad-introduction-to-feature-engineering-ab27a9636f8a)
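As a hedged illustration of what turning raw X into better X can look like, assuming pandas and invented column names:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data: transaction timestamps and amounts.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-12-01 09:30", "2022-12-03 23:10"]),
    "amount": [12.5, 230.0],
})

# The raw timestamp is rarely useful as-is; derived features often are.
df["hour"] = df["timestamp"].dt.hour
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5
df["log_amount"] = np.log1p(df["amount"])  # tame a heavy-tailed amount
print(df)
```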
---
Precision
![bg right:40% 90%](./img/01-precision.png)
<!-- What does good precision mean? -->
<!-- Can we be "too precise"? -->
---
Recall
![bg right:40% 90%](./img/01-recall.png)
<!-- What does good recall look like? -->
<!-- What about "capturing too much"? -->
<!-- Tradeoff! Ideally you want both -->
<!-- It depends on the cost of errors:
- Pharma lab: false positive? OK. False negative? Terrible
- Justice: false negative? OK. False positive? Terrible
-->
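A worked toy example of both metrics, with invented labels: precision = TP / (TP + FP), recall = TP / (TP + FN).

```python
# Toy ground truth and predictions, for illustration only.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # hits
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # misses

precision = tp / (tp + fp)  # of what we flagged, how much was right? -> 0.75
recall = tp / (tp + fn)     # of what we had to find, how much did we? -> 0.75
```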
---
## Other metrics?
<br />
<br />
<br />
- See [[TDS] 20 Popular Metrics](https://towardsdatascience.com/20-popular-machine-learning-metrics-part-1-classification-regression-evaluation-metrics-1ca3e282a2ce)
<!--
Classification Metrics (accuracy, precision, recall, F1-score, ROC, AUC, …)
Regression Metrics (MSE, MAE)
Ranking Metrics (MRR, DCG, NDCG)
Statistical Metrics (Correlation)
Computer Vision Metrics (PSNR, SSIM, IoU)
NLP Metrics (Perplexity, BLEU score)
Deep Learning Related Metrics (Inception score, Frechet Inception distance)
-->
---
Supervised or not?
[Game]
<!-- Supervised: find the rule of my game of Eleusis (number-triplet variant). -->
<!-- Unsupervised: clustering music genres -->
---
<!--
backgroundColor: black
color: white
-->
Examples of supervised algorithms
---
- Linear Regression
- Decision trees
- Nearest Neighbor
---
Examples of unsupervised algorithms
---
- Clustering, e.g. **K-means**
- Anomaly Detection
<!-- To detect anomalies, we have observations x1, ..., xn ∈ X. The underlying presumption is that most of the data come from the same (unknown) distribution: this is the "normal" data.
However, some observations come from a different distribution: these are the anomalies, and they can arise for several reasons.
The task is to learn a concise description of the normal data, so that divergent observations stand out as outliers. -->
- Principal Component Analysis
<!-- Principal Component Analysis is an unsupervised learning algorithm used for dimensionality reduction in machine learning.
It is a statistical approach that transforms the observations of correlated features into a collection of linearly uncorrelated components via an orthogonal transformation. These new transformed features are known as the Principal Components.
PCA is used for exploratory data analysis and predictive modeling: it projects high-dimensional data onto a lower-dimensional surface, keeping the components with the highest variance. It is a feature-extraction technique that reveals hidden patterns in the dataset while reducing its dimensionality. -->
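A minimal sketch of both ideas, assuming scikit-learn and random toy data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Toy unlabeled data (an assumption): 200 points in 5 noisy dimensions.
X = np.random.rand(200, 5)

# K-means: group the points into 3 clusters, no labels involved.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# PCA: keep the 2 directions that carry the most variance.
X_2d = PCA(n_components=2).fit_transform(X)
print(labels[:10], X_2d.shape)  # e.g. [2 0 1 ...] (200, 2)
```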
---
<!--
> _1956 Dartmouth Summer Research Project on Artificial Intelligence_
<!--
In 1956, the Dartmouth summer workshop was held with prestigious participants (Minsky, Shannon, McCarthy, etc.).
This event marked the starting point of Artificial Intelligence as a field of its own.
The hopes were high: beating a chess champion, proving math theorems, replacing human work. Following this meeting, Rosenblatt created the perceptron in 1958, a very simplified model of a biological neuron that aimed to classify images. Yes, "artificial neural networks" and "computer vision" aren't new.
Despite this initial progress, the field suffered severe cutbacks in funding, leading to the first AI winter in the 70s.
The expectations were too great compared to what was achieved.
This phenomenon occurred again in the 80s.
At that point, practical applications came mainly from approaches other than neural networks.
-->
---
### Perceptron
![](https://miro.medium.com/max/1100/1*v88ySSMr7JLaIBjwr4chTw.webp)
<!-- Image credit: https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f?gi=15a9a7836230 -->
---
<!-- _backgroundColor: -->
![bg fit 50%](https://upload.wikimedia.org/wikipedia/commons/8/8a/Perceptron_example.svg)
<!-- A diagram showing a perceptron updating its linear boundary as more training examples are added. -->
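The update shown in the diagram is Rosenblatt's rule; a short numpy sketch, on invented, linearly separable toy points:

```python
import numpy as np

# Toy 2D points with labels in {-1, +1}, chosen to be linearly separable.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])

w, b = np.zeros(2), 0.0
for _ in range(10):                 # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # misclassified (or on the boundary)?
            w += yi * xi            # nudge the boundary using the example
            b += yi
print(w, b)                         # a separating line: w=[2. 1.], b=1.0
```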
---
<!-- _backgroundColor:
_color: -->
## Multi-Layer Perceptron
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
![bg fit 80%](https://miro.medium.com/max/1100/1*CJEBy3GCaGQKNx7PEy-w5w.webp)
---
<!-- color: white -->
## Loss Function
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
![bg fit 70%](https://arthurdouillard.com/deepcourse/introduction/lossfunction.webp)
<!-- Image Credit: Arthur Douillard's Deep Learning Course -->
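Two common choices, written out directly; a sketch with invented toy values, not the course's prescribed forms:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: penalizes large mistakes quadratically (regression).
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # Cross-entropy: punishes confident wrong answers hardest (classification).
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(mse(np.array([1.0, 2.0]), np.array([1.5, 1.0])))               # 0.625
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))  # ~0.16
```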
---
### Back to Dartmouth Summer of dreams
<!-- The study is to proceed on the basis of the conjecture that
every aspect of learning or any other feature of intelligence can in principle
be so precisely described
that a machine can be made to simulate it.
An attempt will be made to find:
- how to make machines use language
- form abstractions and concepts
- solve kinds of problems now reserved for humans
- and improve themselves.
-->
> We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
---
<!-- _backgroundColor:
_color:
-->
### (Gartner Hype Cycle)
<br />
###### ☝ 🔺🔺🔺🔺🔺🔺🔺🔺🔺 We're here
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
![bg fit 60%](https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Gartner_Hype_Cycle.svg/1920px-Gartner_Hype_Cycle.svg.png)
---
### AI Winter: 1984 and other disappointments
# ❄🤖❄
<!-- The AI winter was a result of such hype, due to over-inflated promises by developers, unnaturally high expectations from end-users, and extensive promotion in the media -->
<!-- Even today: -->
---
### Yann Le Cun
<!--
Yann Le Cun has been working on machine learning since the 1980s, in particular on backpropagation.
In 1987, Yann Le Cun joined the University of Toronto, and in 1988 the AT&T laboratories, where he developed supervised learning methods.
He then turned to the compression algorithms behind the DjVu archiving format, and to the automatic recognition of bank checks.
Yann Le Cun is a professor at New York University, where he created the Center for Data Science. He works notably on the technology behind self-driving cars.
On December 9, 2013, Yann Le Cun was invited by Mark Zuckerberg to join Facebook to create and lead the FAIR ("Facebook Artificial Intelligence Research") AI lab in New York, Menlo Park, and, since 2015, Paris, working notably on image and video recognition. He had previously declined a similar offer from Google.
In 2016, Yann Le Cun held the year-long "Informatique et sciences numériques" chair at the Collège de France.
In January 2018, Yann Le Cun stepped down as head of Facebook's AI research division, handing over to Jérôme Pesenti, to take a research position as Facebook's chief AI scientist.
-->
![bg right ](https://www.controcorrenteblog.com/wp-content/uploads/2015/09/Yann-LeCun-1.jpg)
- [BackPropagation](http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf)
- [Deep Learning](https://www.historyofdatascience.com/yann-lecun/)
---
---
### Geoffrey Hinton
![bg right](https://upload.wikimedia.org/wikipedia/commons/thumb/3/34/Geoffrey_Hinton_at_UBC.jpg/1024px-Geoffrey_Hinton_at_UBC.jpg)
<!--
Hinton was co-author of a highly cited paper published in 1986 that popularized the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach.
Hinton is viewed as a leading figure in the deep learning community.
The dramatic image-recognition milestone of AlexNet, designed in collaboration with his students Alex Krizhevsky and Ilya Sutskever for the ImageNet challenge 2012, was a breakthrough in the field of computer vision.
Hinton received the 2018 Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of AI" and "Godfathers of Deep Learning", and have continued to give public talks together. -->
---
<!-- _backgroundColor: white
_color: black
_footer: '' -->
![bg 60%](https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Comparison_image_neural_networks.svg/1024px-Comparison_image_neural_networks.svg.png)
<!-- AlexNet contained eight layers; the first five were convolutional layers, some of them followed by max-pooling layers, and the last three were fully connected layers. It used the non-saturating ReLU activation function, which showed improved training performance over tanh and sigmoid. -->
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
#### LeNet vs AlexNet | [Learn the difference](https://programmathically.com/deep-learning-architectures-for-image-classification-lenet-vs-alexnet-vs-vgg/)
---
### Andrew Ng
![bg fit right](https://upload.wikimedia.org/wikipedia/commons/a/a3/Andrew_Ng_WSJ_%282%29.jpg)
- deeplearning.ai
- "AI Optimist"
<!-- Ng was a co-founder and head of Google Brain and was the former chief scientist at Baidu, building the company's Artificial Intelligence Group into a team of several thousand people.
Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab, SAIL). Ng has also made substantial contributions to the field of online education as the co-founder of both Coursera and deeplearning.ai.
Ng believes that AI technology will improve people's lives, not that it is an anathema that will "enslave" the human race. He believes the potential benefits of AI outweigh the threats, which he considers exaggerated. He has stated that
"Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!"
A real threat concerns the future of work: "Rather than being distracted by evil killer robots, the challenge to labour caused by these machines is a conversation that academia and industry and government should have." A particular goal of Ng's work is to "democratize" AI learning so that people can learn more about it and understand its benefits. Ng's stance on AI is shared by Mark Zuckerberg, but opposed by Elon Musk.
In 2017, Ng said he supported basic income to allow the unemployed to study AI so that they can re-enter the workforce. He has stated that he enjoyed Erik Brynjolfsson and Andrew McAfee's "The Second Machine Age", which discusses issues such as AI displacement of jobs.
-->
---
### Eliezer Yudkowsky
- ###### [HELL OF AN] AI PESSIMIST
- [Machine Intelligence Research Institute _[intelligence.org]_](https://intelligence.org/research/)
- [LessWrong](https://lesswrong.com) & [Overcoming Bias](https://overcomingbias.com)
- AI Box Experiments
---
<!--
_backgroundColor: white
_color: #111
_footer: ""
-->
![bg 100%](./img/01-miri.png)
---
<!--
_backgroundColor: white
_color: #111
_footer: ""
-->
## LessWrong <3
---
## To sum up
<!-- Round the table: one thing you learned today -->
<!-- Show of hands: something that surprised us? -->
<!--
Show of hands: who has a big question?
A blind spot in their mental map?
-->
<!-- Let's prepare for what's next:
Round the table - what's the most important thing you want to get out of the labs? -->
---
### Conclusion:
> the road is _long_, but the way is _clear_
![bg right 90%](https://miro.medium.com/max/640/0*JWCLdKhz-0e_77tB.webp)
<!-- Image Credit: Stewart Brand - Whole Earth Catalog -->
<!-- This is the DREAM TIME! We are the first generation limited not by technical possibility, but by our imagination 🤩 -->
<!-- "Huge, huge potential" -->
<!-- -> Huge, huge responsibilities -->
<!-- OK, but in practice?
- How do we choose a model for the problem at hand?
- How do we train a model?
- How do we test its quality? Its biases?
- How do we build on top of it? Deploy it? Collaborate as a team?
---
See you next year to go through all of that together :)
-->
---
Objectives:
- Discover its limits through a few big fails
---
Format: written submission (Markdown file or Doc, with one section per _Level_)
On the intranet or to formation@nech.pl
<br />
**DEADLINE: December 15, 23:59:59**
<br />
---
## Lvl 0: The basics
![bg right:35% w:300](https://www.meme-arsenal.com/memes/a6effdba5a540560c7b5ee616ee0f1f3.jpg)
<!-- Image credit: World of Warcraft Tutorial boar -->
###### Write one sentence in your own words to define what each of these means:
- "Learning"
- "Deep Learning"
- "Precision and Recall"
- "Overfit"
---
## Lvl 0.1: Bonus - in real life
- Think of an example of a product or service that is "too precise": what problem does that create for the user?
- Same question for "too high a recall": what problem?
---
## Lvl 1: The pioneers
Among the historical figures mentioned in the first class,
pick one.
Read up a bit on this person, then share here something they said or did that you personally find interesting.
---
## Lvl 2: Playing with LMs 🕹
<br />
<br />
- One sentence of its output that you find laughable _(explain why: what did it get so wrong?)_
---
<!-- _footer: '' -->
## Lvl 2.1: Bonus - Try OpenAI Codex
- Create your account on OpenAI
- Try OpenAI Codex: write the signature and docstring of a function that fulfills a simple need (validate that a username matches X, check that a number has a given property, etc.), then let the model generate its code.
- Share the generated code and, in a few words, your opinion: what strengths? What weaknesses? What benefits or risks in using it at work?
---
### Lvl 3: Playing with Image Models
- Open "ThisPersonDoesNotExist" and give, in a few words, your impression of the quality of the generated images.
- Open "ThisXDoesNotExist", pick another model, and comment on its quality.
---
### Lvl 3 Bonus: Your own image generation
Use MidJourney, CrAIyon (Dall-E Mini), StableDiffusion, or DeepDreamGenerator to create an image based on a text prompt or on style transfer.
Include your favorite result in this submission and comment:
- Where does the model excel?
- Where does the model fail?
- Did you first try another subject that the model failed to render?