Artificial intelligence (AI) has rapidly evolved from its rudimentary beginnings into complex systems capable of impressive feats. Yet, it tends to stumble when it comes to creating new knowledge and definitions. AI's abilities, and possibly also its limitations, run parallel to how we interpret and understand the world around us.

This post delves into the debates swirling around AI, its capacities, and its potential role in knowledge creation, drawing on Human Compatible by Stuart Russell.

AI and the Challenge of Defining New Knowledge

Defining new knowledge is an area where AI currently struggles. Inductive logic programming could allow AI to create new definitions, but even this can lead to an infinite spiral of increasingly complex definitions. Feature engineering has so far proven the most effective way of feeding information into AI systems. However, hand-crafted features are better suited to narrow, specific inputs (e.g., traffic congestion or pixel data in images) than to broad, generalized datasets.
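To make the feature-engineering point concrete, here is a minimal sketch for the traffic-congestion example mentioned above. The function name, field choices, and thresholds are all hypothetical illustrations, not taken from the book: the point is simply that a human decides which summaries of the raw input matter.

```python
# Sketch: hand-crafted feature engineering over narrow sensor input.
# All names and thresholds below are hypothetical illustrations.

def congestion_features(speeds_kmh, hour_of_day):
    """Summarize raw vehicle speed readings into a small feature vector."""
    avg_speed = sum(speeds_kmh) / len(speeds_kmh)
    # Fraction of vehicles crawling below 20 km/h (a chosen proxy for congestion).
    frac_slow = sum(1 for s in speeds_kmh if s < 20) / len(speeds_kmh)
    # Binary indicator for typical rush-hour windows.
    rush_hour = 1 if hour_of_day in (7, 8, 9, 16, 17, 18) else 0
    return [avg_speed, frac_slow, rush_hour]

features = congestion_features([12, 18, 45, 30], hour_of_day=8)
```

Each feature encodes a human judgment about what is relevant, which is exactly why the approach works well for narrow inputs but scales poorly to broad, open-ended data.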

The Drawbacks of Deep Learning

Deep learning, often hailed as a game-changer in AI, also has its shortcomings. Its learned representations lack clear boundaries, which makes it difficult to build concepts sequentially. While this fluidity allows a high degree of refinement and optimization, it also removes the platform for logical inference. This raises a question: do we need to impose boundaries on AI to foster logical inference, or should we rely more on experimental observation?

Rethinking the Definition of AI

The standard definitions of AI and intelligence tend to be very human-centric. If the ultimate goal is utility-based, we may not need to understand the AI's full inner workings; all we would care about is the final result, akin to a black-box output. The history of human technological advancement suggests this is possible: we flew before we fully understood aerodynamics, succeeding without understanding.

The Role of Traditional Programming in AI

Even though deep learning has its strengths, traditional programming still has a vital role to play in AI, particularly in deductive reasoning and prediction. It can complement the broader, less specific capabilities of deep learning by connecting the dots and aiding in the interpretation of data. The advent of large-scale attention models, which postdate the book, highlights the promise of scaling up AI models.
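One way to picture traditional programming "connecting the dots" is deductive post-processing of a learned model's output. The sketch below is a hypothetical illustration, not a method from the book: a hand-written rule vets a statistical prediction whose confidence is low.

```python
# Sketch: a deductive rule layered over a statistical model's output.
# The labels, confidence threshold, and rule are hypothetical.

def classify_with_rules(label, confidence, is_rush_hour):
    """Apply a hand-written sanity rule to a model's prediction."""
    # Deductive check: rush hour rarely coincides with light traffic,
    # so a low-confidence "light_traffic" call gets flagged instead.
    if label == "light_traffic" and confidence < 0.6 and is_rush_hour:
        return "uncertain"
    return label

print(classify_with_rules("light_traffic", 0.5, is_rush_hour=True))
print(classify_with_rules("heavy_traffic", 0.9, is_rush_hour=True))
```

The model supplies pattern recognition; the explicit code supplies the deductive step that interprets it.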

Building an AI that Understands Common Sense

Deep learning is particularly adept at finding the 'nearest neighbors' of a concept, a capability that can loosely be likened to common sense. Using human agents to help conceptualize the output of AI could be significantly beneficial for knowledge creation and prediction.
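A minimal sketch of what 'nearest neighbors' of a concept means in practice: ranking items by similarity in a vector space. The toy three-dimensional embeddings below are made up for illustration; real systems learn vectors with hundreds of dimensions.

```python
import math

# Toy embedding space: the vectors are invented for illustration only.
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, k=1):
    """Return the k concepts most similar to the query concept."""
    ranked = sorted(
        (w for w in embeddings if w != query),
        key=lambda w: cosine(embeddings[query], embeddings[w]),
        reverse=True,
    )
    return ranked[:k]

print(nearest("cat"))  # "dog" sits closer to "cat" than "car" does
```

Here "similar concepts cluster together" does some of the work we informally call common sense, which is why the analogy in the text is apt but only approximate.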

Our understanding of AI is ever-evolving, but one thing remains clear: the potential of AI is intimately tied to how we define intelligence and knowledge. Whether or not we reach a collective agreement, AI stands as a testament to human ingenuity in our constant quest for better ways to understand and interact with our world.