Course Launch Community Event

Sylvain Gugger

We’re excited to share that after a lot of work from the Hugging Face team, part 2 of the Hugging Face Course will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task, and upload the result to the Model Hub. Part 2 will focus on all the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization, and question answering. It will also take a deeper dive into the whole Hugging Face ecosystem, in particular 🤗 Datasets and 🤗 Tokenizers.

To go along with this release, we are organizing a big community event to which you are invited! The program includes two days of talks, followed by team projects focused on fine-tuning a model on any NLP task, ending with live demos like this one. Those demos will go nicely in your portfolio if you are looking for a new job in Machine Learning. We will also deliver a certificate of completion to all the participants that achieve building one of them.

AWS is sponsoring this event by offering free compute to participants via Amazon SageMaker.


To register, please fill out this form. You will find more details below on the two days of talks.



Day 1 (November 15th): A high-level view of Transformers and how to train them

The first day of talks will focus on a high-level presentation of Transformer models and the tools we can use to train or fine-tune them.

Thomas Wolf: Transfer Learning and the birth of the Transformers library

Thomas Wolf is co-founder and Chief Science Officer of Hugging Face. The tools created by Thomas Wolf and the Hugging Face team are used across more than 5,000 research organizations, including Facebook Artificial Intelligence Research, Google Research, DeepMind, Amazon Research, Apple, and the Allen Institute for Artificial Intelligence, as well as most university departments. Thomas Wolf is the initiator and senior chair of the largest research collaboration that has ever existed in Artificial Intelligence, “BigScience”, as well as a set of widely used libraries and tools. Thomas Wolf is also a prolific educator and a thought leader in the field of Artificial Intelligence and Natural Language Processing, and a regular invited speaker at conferences all around the world (https://thomwolf.io).

Margaret Mitchell: On Values in ML Development

Margaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google’s Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics Google-internally. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation, and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master’s in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005 to 2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.

Jakob Uszkoreit: It Ain’t Broke So Don’t Fix Let’s Break It

Jakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules for vaccines and therapeutics using large-scale deep learning in a tight loop with high-throughput experiments, with the goal of making RNA-based medicines more accessible, more effective, and more broadly applicable. Previously, Jakob worked at Google for more than a decade, leading research and development teams in Google Brain, Research, and Search, working on deep learning fundamentals, computer vision, language understanding, and machine translation.

Jay Alammar: A gentle visual intro to Transformers models

Jay Alammar, Cohere. Through his popular ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts, from the basic (ending up in NumPy and pandas docs) to the cutting-edge (Transformers, BERT, GPT-3).

Matthew Watson: NLP workflows with Keras

Matthew Watson is a machine learning engineer on the Keras team, with a focus on high-level modeling APIs. He studied Computer Graphics as an undergraduate and earned a Master’s at Stanford University. An almost-English major who turned towards computer science, he is passionate about working across disciplines and making NLP accessible to a wider audience.

Chen Qian: NLP workflows with Keras

Chen Qian is a software engineer on the Keras team, with a focus on high-level modeling APIs. Chen received a Master’s degree in Electrical Engineering from Stanford University, and he is especially interested in simplifying the code implementation of ML tasks and large-scale ML. As a taste of what such a workflow looks like, see the sketch below.
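Both Keras talks center on high-level text workflows. Here is a minimal, illustrative Keras text-classification sketch; the toy data, layer sizes, and hyperparameters are all placeholder assumptions, not material from the talks:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy corpus and labels (placeholder data for illustration only).
texts = tf.constant(["a great movie", "a terrible plot", "loved it", "boring"])
labels = tf.constant([1, 0, 1, 0])

# Map raw strings to integer token ids.
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=16)
vectorizer.adapt(texts)

# A small end-to-end model: raw text in, probability out.
model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=1000, output_dim=32),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=2)
```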

Mark Saroufim: Train a Model with PyTorch

Mark Saroufim is a Partner Engineer at PyTorch, working on OSS production tools including TorchServe and PyTorch Enterprise. In his past lives, Mark was an Applied Scientist and Product Manager at Graphcore, yuri.ai, Microsoft, and NASA’s JPL. His primary passion is to make programming more fun.
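For reference, a bare-bones PyTorch training loop of the kind this talk covers might look like the following; the model and data here are toy placeholders:

```python
import torch
from torch import nn

# Toy model, loss, optimizer, and data (placeholders for illustration).
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32,))

for epoch in range(3):
    optimizer.zero_grad()                   # reset gradients from the previous step
    loss = loss_fn(model(inputs), targets)  # forward pass and loss
    loss.backward()                         # backpropagate
    optimizer.step()                        # update the weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```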



Day 2 (November 16th): The tools you will use

Day 2 will be focused on talks by the Hugging Face, Gradio, and AWS teams, showing you the tools you will use.

Lewis Tunstall: Easy Training with the 🤗 Transformers Trainer

Lewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of an upcoming O’Reilly book on Transformers, and you can follow him on Twitter (@_lewtun) for NLP tips and tricks!
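As a flavor of the Trainer API this talk covers, here is a minimal fine-tuning sketch; the dataset, checkpoint, and hyperparameters are illustrative choices, not the talk’s actual examples:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative dataset and checkpoint.
raw = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = raw.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# The Trainer handles batching, device placement, and the training loop.
args = TrainingArguments("test-trainer", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], tokenizer=tokenizer)
trainer.train()
```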

Matthew Carrigan: New TensorFlow Features for 🤗 Transformers and 🤗 Datasets

Matt is responsible for TensorFlow maintenance at Transformers, and will eventually lead a coup against the incumbent PyTorch faction, which will likely be coordinated via his Twitter account @carrigmat.
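One feature in this area, following the pattern used in the course, is `Dataset.to_tf_dataset`, which turns a 🤗 Dataset into a batched `tf.data.Dataset` ready for `model.fit`. A sketch, with illustrative checkpoint and dataset choices:

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorWithPadding,
                          TFAutoModelForSequenceClassification)

raw = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized = raw.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

# The collator pads each batch dynamically and returns TF tensors.
collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_train = tokenized["train"].to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=["labels"],
    shuffle=True,
    batch_size=16,
    collate_fn=collator,
)

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(tf_train, epochs=1)
```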

Lysandre Debut: The Hugging Face Hub as a way to collaborate on and share Machine Learning projects

Lysandre is a Machine Learning Engineer at Hugging Face, where he is involved in many open-source projects. His aim is to make Machine Learning accessible to everyone by developing powerful tools with a very simple API.
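For instance, once logged in with `huggingface-cli login`, sharing a fine-tuned model on the Hub is a couple of calls; the repository name here is a made-up example:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# ... fine-tune the model ...

# Push model and tokenizer to a Hub repository under your account.
model.push_to_hub("my-awesome-model")       # hypothetical repo name
tokenizer.push_to_hub("my-awesome-model")
```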

Sylvain Gugger: Supercharge your PyTorch training loop with 🤗 Accelerate

Sylvain is a Research Engineer at Hugging Face, one of the core maintainers of 🤗 Transformers, and the developer behind 🤗 Accelerate. He likes making model training more accessible.
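The core of 🤗 Accelerate is a handful of changes to a plain PyTorch loop. A minimal sketch with toy placeholder model and data:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy model, optimizer, and data (placeholders for illustration).
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
train_dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
loss_fn = nn.CrossEntropyLoss()

accelerator = Accelerator()
# prepare() moves everything to the right device(s); no manual .to(device) calls.
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

model.train()
for inputs, targets in train_dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)   # replaces loss.backward()
    optimizer.step()
```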

Lucile Saulnier: Get your own tokenizer with 🤗 Transformers & 🤗 Tokenizers

Lucile is a machine learning engineer at Hugging Face, developing and supporting the use of open-source tools. She is also actively involved in many research projects in the field of Natural Language Processing, such as collaborative training and BigScience.
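One way to get your own tokenizer is to retrain an existing one’s algorithm on your corpus. A sketch, where the corpus and vocabulary size are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative corpus: stream batches of raw text from a dataset.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def corpus_iterator(batch_size=1000):
    for i in range(0, len(raw), batch_size):
        yield raw[i : i + batch_size]["text"]

# Retrain an existing tokenizer's algorithm on the new corpus.
old_tokenizer = AutoTokenizer.from_pretrained("gpt2")
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=32000)
new_tokenizer.save_pretrained("my-new-tokenizer")  # hypothetical output directory
```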

Merve Noyan: Showcase your model demos with 🤗 Spaces

Merve is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.

Abubakar Abid: Building Machine Learning Applications Fast

Abubakar Abid is the CEO of Gradio. He received his Bachelor of Science in Electrical Engineering and Computer Science from MIT in 2015, and his PhD in Applied Machine Learning from Stanford in 2021. In his role as the CEO of Gradio, Abubakar works on making machine learning models easier to demo, debug, and deploy.
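A Gradio demo can indeed be only a few lines. A minimal sketch, where the pipeline task is an arbitrary example:

```python
import gradio as gr
from transformers import pipeline

# Any 🤗 pipeline works here; sentiment analysis is just an example.
classifier = pipeline("sentiment-analysis")

def predict(text):
    result = classifier(text)[0]
    return f"{result['label']} (score: {result['score']:.2f})"

# Launches a local web demo; 🤗 Spaces can host the same app.
gr.Interface(fn=predict, inputs="text", outputs="text").launch()
```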

Mathieu Desvé: AWS ML Vision: Making Machine Learning Accessible to all Customers

Technology enthusiast, maker in my free time. I like challenges, solving problems for clients and users, and working with talented people to learn every day. Since 2004, I have worked in multiple positions, switching between frontend, backend, infrastructure, operations, and management, trying to solve common technical and managerial issues in an agile manner.

Philipp Schmid: Managed Training with Amazon SageMaker and 🤗 Transformers

Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.
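Managed training with the SageMaker Hugging Face estimator looks roughly like this; the instance type, container versions, and script names are illustrative assumptions:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

# IAM role with SageMaker permissions (works inside a SageMaker notebook;
# elsewhere, pass your role ARN explicitly).
role = sagemaker.get_execution_role()

# Point the estimator at your own training script (hypothetical file name).
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.12",
    pytorch_version="1.9",
    py_version="py38",
    hyperparameters={"epochs": 3, "model_name_or_path": "bert-base-uncased"},
)

# Launches a managed training job on AWS infrastructure.
huggingface_estimator.fit()
```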


