NeurIPS Special – more than ChatGPT – Industrial AI Podcast

We talk about ChatGPT, why deep learning s***s, which new paradigms have been introduced by Geoff Hinton and Yann LeCun, and what our guest thinks of these developments. Our guest is Prof. Dr. Günter Klambauer from JKU Linz.

Among other things, we talk about researchers looking for new learning paradigms: Yann LeCun suggests energy-based models that are regularized and focus on positive pairs, and Geoff Hinton proposes two forward passes as a more efficient algorithm than backpropagation.

The podcast is growing, and we want to keep growing. That’s why our German-language podcast is now also available in English. We welcome our new listeners.

We thank our new partner, Siemens.

Our guest: Prof. Dr. Günter Klambauer, Johannes Kepler University (JKU) Linz.



This podcast is supported by Siemens, your partner for industrial-grade AI.

Hi there, welcome to a new episode of the Industrial AI Podcast. My name is Peter Sieberg, and today all I will do is introduce you to a new interview which Robert did the other day with Günter Klambauer. Günter is associate professor for AI in life sciences at Johannes Kepler University in Linz, Austria, which we have heard of before, as it is the university where Sepp Hochreiter also resides. Günter visited the NeurIPS conference in New Orleans a week or two ago, and he shares the highlights of the conference with Robert. Next week you will hear Robert and me again, providing you with the latest and greatest news on industrial AI. For now, enjoy listening.

Hello Günter, welcome to our podcast.

Hello Robert, thanks for inviting me. I’m happy to join this podcast.

Günter, introduce yourself to the listeners, briefly, in three sentences.

Yes, so my name is Günter Klambauer. I’m a university professor for artificial intelligence in life sciences, and I’m doing research in the area of machine learning, and machine learning applied to problems in molecular biology, chemistry and other life sciences.

We are normally not focused on life sciences, but today we invited you because you were in New Orleans. How did you like it?

Yes, this was a blast again, and I’m very happy that this was a conference in a non-virtual format again, where we could physically go. I’ve met a lot of great people, lots of machine learning experts; renowned experts joined this conference. They even visited our poster and discussed with us. That was great.

Tell us something about these posters, because the conference is famous for them, I think.

Yes, I think the poster sessions at NeurIPS are much more prominent than at other conferences, so it’s a huge success if you get to present a poster there. There are close to 10,000 papers submitted, and only a fraction of them are accepted for poster presentation. So there are several huge poster sessions, which means there’s a big hall full of hundreds of posters, and at each poster there is a researcher or a team of researchers presenting it. People like me walk through these poster halls, and when an interesting poster appears, I approach it, and the authors of the poster present and explain what they did in their research. Of course, we were also presenting one of our research works as a poster, and we were very happy that Yann LeCun and also many others visited our poster, so we could present our work to them.

It’s not allowed to use a monitor, right?

You are allowed, but it’s difficult to bring monitors. There was one particular poster where they had a huge printed poster with cut-outs, into which they had fitted, I think, eight tablets.

Okay, let’s come back to our topic. Did you like it, which technologies were the focus, and why?

Yes, I liked it a lot, and I think there was much more interaction again, and a lot of science going on in discussions. The current technologies that are heavily discussed are of course large language models and text-to-image models, besides some other techniques that I will maybe have the chance to talk about later. The large language models, for example GPT and ChatGPT, which dropped during NeurIPS, are models that can simulate natural language. They are able to write meaningful sentences or answer questions, and they are heavily discussed not only technologically, but also with regard to their implications for technology and society.

You mentioned ChatGPT. What is your opinion on ChatGPT?

For me, it’s like other large language models: it is very good at producing text that is plausible, but it’s not very good at, for example, reasoning or memorizing facts.

It’s something like a person that pretends to be involved in these issues, and you can easily trick it, and it makes a lot of errors. But from the scientific perspective, to get to this level, of having a machine learning model, a neural network, producing text of that quality, is of course a great success. It is completely exaggerated, however, to claim that these large language models have any form of consciousness. But that’s maybe a topic for future research.

Yes, of course. There was a talk by David Chalmers focused entirely on shedding light, from different sides, on whether large language models could be conscious or not. But putting that aside, large language models will continue to be relevant in science and research, and people will come up with other models. Currently these are mostly Transformer-based, and they get better and better when you scale them up, although the first research works already show diminishing returns on scaling: if you make the models larger and larger you get improved quality, but the improvement shrinks as the model grows, so it doesn’t improve as much as before.

Also, Sepp Hochreiter commented on this. Before the Transformer-based language models we had LSTM-based language models, and Hochreiter is the inventor of the LSTM. He gave a very exciting talk, and he also commented on large language models. He said that within their millions or billions of parameters they memorize things and texts explicitly, like addresses, which is stupid. This went so far that he even said “deep learning sucks”, because of this basically wrong usage: the word “large” in “large language model” means that they have very many adjustable weights, adjustable parameters, and they use those to basically store text parts, while there should be other mechanisms to store and retrieve individual text parts. But we’re diving into science now.

Yeah, but that’s very interesting. So, ChatGPT and large language models, one very important topic. What else is going on?

I would say, still a bit in connection with that: text-to-image models, meaning you have a text prompt that generates an image. There are many models doing that, and people are now thinking about how we can measure how good these generated images are, and about such things.

But there’s another huge trend aside from that, which is about new learning paradigms. For example, one of the invited talks was by Geoff Hinton, and he proposed the forward-forward algorithm. This is basically a method to train neural networks, like the large language models. We say we train them: we have a huge amount of data, and on this data we train them, meaning that we adjust their parameters. For a long time there was a single training paradigm: we have inputs, and we have labels for them. Imagine you have a database of a million images, and for each image you know the image class, like cat, dog, ship, airplane and so on. Then you train your neural network to respond with the correct image label, or label in general. This is so-called supervised learning, very standard supervised learning, and the weights are adjusted by backpropagation. That is a mathematical technique: taking the derivative of a so-called objective function and then slowly, gradually adapting the weights. And Hinton said that the backpropagation algorithm cannot be the thing that is done in our brain by our neurons, and that we should have other, more efficient methods that learn differently.
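To make the standard paradigm concrete before moving on, here is a minimal sketch of supervised learning with backpropagation in PyTorch. The network, data shapes and hyperparameters are illustrative assumptions, not something discussed in the episode:

```python
import torch
import torch.nn as nn

# Minimal sketch of the standard supervised paradigm:
# inputs with known labels, a network, an objective
# function, and weights adjusted by backpropagation.

model = nn.Sequential(            # a small illustrative classifier
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),           # 10 classes: cat, dog, ship, ...
)
loss_fn = nn.CrossEntropyLoss()   # the "objective function"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch standing in for a labeled image database.
images = torch.randn(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

for step in range(100):
    logits = model(images)          # forward pass
    loss = loss_fn(logits, labels)  # compare with the labels
    optimizer.zero_grad()
    loss.backward()                 # backpropagation: derivative of the objective
    optimizer.step()                # gradually adapt the weights
```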

So he said: instead of a forward pass and then a backward pass, we should do two forward passes. It’s a bit technical, but he explained how learning works that way. And he is not the only one looking for new learning paradigms: Yann LeCun suggests energy-based models that are regularized and focus on positive pairs, and so on. People are looking for new objective functions, for new ways to generate supervised data, which we call self-supervised learning, and people are looking at so-called contrastive learning objectives. So these are new learning paradigms.

That sounds a little bit like reinforcement learning, or am I wrong?

No. Reinforcement learning is related, of course, but this is still without an environment with which you can interact. In reinforcement learning you always have an agent that can interact with an environment. Think of a computer playing a game, or a chess player playing a game: the game is the environment, you do something, and then you get some feedback. But here we’re still talking about a setting where you just get the data set and you have to learn something from it. And it’s also right that we should change our learning paradigms, because we humans also learn a lot unsupervised.
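Returning to Hinton’s “two forward passes” from a moment ago: here is a heavily simplified sketch in the spirit of the forward-forward idea. The “goodness” threshold, layer sizes, and the stand-in positive/negative data are assumptions for illustration, not taken from the talk:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the forward-forward idea: each layer is trained
# locally with two forward passes, one on "positive" (real)
# data and one on "negative" (corrupted) data, with no
# backward pass through the whole network.

layers = [nn.Linear(784, 500), nn.Linear(500, 500)]
optims = [torch.optim.SGD(l.parameters(), lr=0.03) for l in layers]
theta = 2.0  # goodness threshold (assumed value)

def goodness(h):
    # "Goodness" of a layer: sum of squared activations.
    return (h ** 2).sum(dim=1)

x_pos = torch.randn(64, 784)          # stand-in for real data
x_neg = torch.randn(64, 784)          # stand-in for corrupted data

for layer, opt in zip(layers, optims):
    h_pos = torch.relu(layer(x_pos))  # forward pass 1: positive data
    h_neg = torch.relu(layer(x_neg))  # forward pass 2: negative data
    # Push goodness of positives above theta, negatives below.
    loss = (F.softplus(theta - goodness(h_pos)) +
            F.softplus(goodness(h_neg) - theta)).mean()
    opt.zero_grad()
    loss.backward()                   # gradient stays local to this layer
    opt.step()
    # Normalize and detach so no gradients flow between layers.
    x_pos = (h_pos / (h_pos.norm(dim=1, keepdim=True) + 1e-8)).detach()
    x_neg = (h_neg / (h_neg.norm(dim=1, keepdim=True) + 1e-8)).detach()
```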

When babies learn something, they’re not always told by their parents: this is a dog, this is a tree, this is a table, or whatever. They learn a lot by just observing, and that’s very efficient, and that’s not how neural networks, how the large language models, currently learn. Therefore we could probably improve AI and machine learning by coming up with new learning paradigms.

That’s very interesting, because my co-host Peter always argues that reinforcement learning is very close to how humans learn, and you say no. That’s a different perspective, right?

Yes, because I don’t think we humans learn a lot by trial and error, where you go around, you try something, you fail, and then you adapt your strategy. That’s not the usual case. I think we gain a lot of knowledge about the world by just looking at it. When we’ve seen objects from different perspectives, we know what belongs together. Babies at first don’t even know how to move their hands, but then they move a hand, and this hand moves another object, and nobody tells them whether that was good or bad. Maybe they intrinsically generate some reward, but still, we’re mostly observing, without any strong feedback that this was good or bad.

We learn how things work, which things belong together, and what objects are. I think this observing is very important, because we learn a lot by observing a situation, also temporally: for example, we think that events that are close together in time belong together. If I move a glass on the table and it suddenly drops, this must have had something to do with my moving the glass, because right then it started to drop. So I think, yeah, it’s definitely not only reinforcement learning.

What will this approach change?

I think it’s a bunch of approaches, and it’s unclear at this point which new learning paradigm will take over. I always say 2021 was the year of contrastive learning, of the contrastive loss; there were many successes.

Can you explain this? What is this contrastive learning?

From last year, for example, there is the CLIP algorithm. This was the first time you could embed text and images in the same embedding space, and this is what allowed all these text-to-image models, where you can enter a text prompt and generate an image. That was enabled by contrastive learning, from last year, and it is already changing a lot.
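As a rough illustration of a CLIP-style contrastive objective: matching image/text pairs (the “positive pairs”) are pulled together in a shared embedding space, all other pairings are pushed apart. The encoders below are stubbed out and the symmetric cross-entropy form follows the commonly published formulation, so treat the details as assumptions rather than what was said on air:

```python
import torch
import torch.nn.functional as F

batch, dim = 32, 512
img_emb = F.normalize(torch.randn(batch, dim), dim=1)  # stand-in image encoder output
txt_emb = F.normalize(torch.randn(batch, dim), dim=1)  # stand-in text encoder output
temperature = 0.07                                     # assumed value

# Similarity of every image to every text in the batch.
logits = img_emb @ txt_emb.t() / temperature

# Row i should match column i: the i-th image belongs to the i-th text.
targets = torch.arange(batch)
loss = (F.cross_entropy(logits, targets) +             # image -> text direction
        F.cross_entropy(logits.t(), targets)) / 2      # text -> image direction
```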

How people interact with AIs is now enabled by text, because we have a common embedding space of images and text. So this new learning paradigm, contrastive learning, has already changed a lot and will continue to do so. I’m very sure that this will allow humans, in many ways, to interact with AIs by natural language, meaning you tell the AI, just in text, to do this and that. But also the output of an AI system: the AI does something, steers something, or finds something out, and what it does, for example which actions it takes, is often coded completely differently inside the AI. Now, at the same time, AIs might be able to write out a small text saying: okay, I changed my strategy, I went over there because I saw this and that. So all these interactions with AI systems, with machine learning systems, but also other AI systems, could now be enabled by having text and natural language as the interface, basically due to that.

Everything is very, very interesting. Now my question, because our name is the Industrial AI Podcast and we are talking about industrial AI: what can industry learn from these approaches?

I think industry is also one of the main beneficiaries of new AI systems. Having text and natural language as an interface would help a lot when interacting in industry, for example.

I think there is a lot of improvement there, and improvements in general, for example in image recognition. Images can be taken anywhere: for example, you produce some products, you can check them by camera and then find potential problems. This has also improved now, with the new architectures for image recognition. But beyond that, and this is another trend I wasn’t able to talk about yet, I think we will have a kind of, I would say, bilateral AI systems.

What was the name?

Yeah, I call it bilateral; you could also say neurosymbolic AI. There’s a lot of work on how machine learning can interact with symbolic components, with discrete structures. There are already great symbolic AI methods, symbolic checks and symbolic procedures, in place in industry, but not at full capacity, and now there’s a lot of new research going on into how to combine machine learning with symbolic AI. This is another exciting trend that I can mention, and it will also have an effect on industry, where some established symbolic AI systems are maybe already running and can be empowered by machine learning systems.

I highly recommend our episode with Festo, a German automation company, maybe you know it.

They are working exactly on this topic, combining these two worlds. Or is it one world? We don’t know. What is your opinion: is it one world or two worlds?

It’s one world, but currently these two scientific areas are two worlds, or mostly two worlds. Historically, the symbolic AI research community and the machine learning community were quite far apart. You also see that at NeurIPS, which we started with: it’s the Conference on Neural Information Processing Systems, it was about neural networks and machine learning, and you couldn’t, or could hardly, see any symbolic AI there. The symbolic AI community has its own researchers and its own conferences. In the next years, in order to advance to a new level of AI systems, these two communities have to get together and form AI systems that work in this one world that needs both. In that connection, I also want to mention that a lot of research work, a research trend, is going on on causality. People try to develop models that are able to find causal reasons for things happening and to identify causal features, and this is a huge trend that will also affect industry, I think.

Let’s come back to the conference. What did you present? What was your topic?

Yes, it’s about the contrastive learning objective and paradigm. This famous CLIP algorithm that changed everything from 2021, it feels very old now actually, although it’s only a year old. So this CLIP model, Contrastive Language-Image Pre-training, which I talked about before, made it possible to embed natural language and images in the same embedding space, and allowed for this text-prompted image generation. We improved it strongly with our method CLOOB, a contrastive leave-one-out boost. It’s a bit technical. Basically, we use a memory-based system to enrich the examples: think of an image, and before you compare it with the text, which is what CLIP does, you first enrich the image by looking at other images. For example, you have an image of, let’s say, a horse; then you look at other pictures of horses and see what is common for horses and how horses look in other images, and only then do you compare with the text. This gave a strong improvement over the CLIP algorithm.
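The “enrichment” CLOOB describes is retrieval from a modern Hopfield network. As a rough, hedged sketch of what such a retrieval step can look like (the beta value, shapes and data are illustrative assumptions, not the paper’s exact setup):

```python
import torch
import torch.nn.functional as F

# Rough sketch of the retrieval step of a modern Hopfield network,
# the kind of memory used to "enrich" an embedding with related
# stored examples before the contrastive comparison.

def hopfield_retrieve(query, memory, beta=8.0):
    # query:  (dim,)    embedding of the current image
    # memory: (n, dim)  embeddings of other stored images
    # The retrieved vector is a softmax-weighted average of the
    # memory, dominated by patterns similar to the query.
    weights = torch.softmax(beta * memory @ query, dim=0)  # (n,)
    return weights @ memory                                # (dim,)

memory = F.normalize(torch.randn(1000, 512), dim=1)
query = F.normalize(torch.randn(512), dim=0)
enriched = hopfield_retrieve(query, memory)  # what similar images share
```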

That’s very interesting. And how was the reaction of the visitors?

Overall, we got extremely good feedback; the poster was crowded for more than two hours of the poster session. Yann LeCun was there too and was very interested. He had been thinking about something very similar to what we did. We actually made two changes to the CLIP algorithm, and one of those changes, to the objective function, he had also suggested in one of his papers almost simultaneously, and therefore he was very interested. Although, in general, Yann LeCun also gave a talk and said the community should abandon contrastive learning methods in favor of regularized methods. That is a statement that is very broadly discussed, and one I actually do not support, but this is a scientific opinion, of course, and that’s also very valuable. Overall it was a great, exciting conference.

And you didn’t get sick? Because many attendees had upset stomachs.

I did not get Covid or any other diseases; at least, I’m fine, although my flight got cancelled, so I had some trouble coming back. I’m back in Austria now.

It was a pleasure to talk to you. Thank you very much.

Thanks, thanks for that. Thank you.
