The document discusses problem-solving agents and their approach to solving problems. Problem-solving agents (1) formulate a goal based on the current situation, (2) formulate the problem by defining relevant states and actions, and (3) search for a solution by exploring sequences of actions that lead to the goal state. Several examples of problems are provided, including the 8-puzzle, robotic assembly, the 8 queens problem, and the missionaries and cannibals problem. For each problem, the relevant states, actions, goal tests, and path costs are defined.
This document discusses various heuristic search algorithms including generate-and-test, hill climbing, best-first search, problem reduction, and constraint satisfaction. Generate-and-test involves generating possible solutions and testing if they are correct. Hill climbing involves moving in the direction that improves the state based on a heuristic evaluation function. Best-first search evaluates nodes and expands the most promising node first. Problem reduction breaks problems into subproblems. Constraint satisfaction views problems as sets of constraints and aims to constrain the problem space as much as possible.
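The hill-climbing step described above can be sketched in a few lines. The objective function and integer neighbourhood below are invented for illustration, not taken from the original slides:

```python
# Minimal hill-climbing sketch (invented example): maximize f(x) = -(x - 7)**2
# over integer states by always moving to the best-scoring neighbour until no
# neighbour improves the heuristic evaluation.

def hill_climb(start, f, max_steps=100):
    current = start
    for _ in range(max_steps):
        neighbours = [current - 1, current + 1]
        best = max(neighbours, key=f)
        if f(best) <= f(current):      # no improving move: local maximum
            return current
        current = best
    return current

f = lambda x: -(x - 7) ** 2
print(hill_climb(0, f))  # climbs step by step toward x = 7
```

Note that plain hill climbing stops at the first local maximum it reaches, which is exactly the weakness the slides' other strategies (such as best-first search) try to address.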
This document discusses mobile robot kinematics. It begins by introducing the challenges of mobile robot motion estimation due to robots moving unbounded in their environment. It then covers wheel kinematic constraints and models for different wheel types. The document discusses how to represent a robot's position and derives forward and inverse kinematic models. It also covers mobile robot maneuverability in terms of degrees of mobility and steerability. The document concludes by discussing path and trajectory considerations as well as motion control approaches for mobile robots.
A humanoid robot is a robot with its body shape built to resemble that of the human body. A humanoid design might be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots may also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.
Purpose
Humanoid robots are used as a research tool in several scientific areas.
Researchers need to understand the human body's structure and behavior (biomechanics) to build and study humanoid robots. Conversely, attempts to simulate the human body lead to a better understanding of it.
Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans. See transhumanism.
Although the initial aim of humanoid research was to build better orthoses and prostheses for human beings, knowledge has been transferred between the two disciplines. A few examples are powered leg prostheses for the neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses, and forearm prostheses.
Besides research, humanoid robots are being developed to perform human tasks such as personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs. Ordinary jobs, such as being a receptionist or an automotive assembly-line worker, are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is deceptively great.
They are becoming increasingly popular for providing entertainment too. For example, Ursula, a female robot, sings, plays music, dances, and speaks to her audiences at Universal Studios. Several Disney attractions employ animatronics, robots that look, move, and speak much like human beings, in some of their theme park shows. These animatronics look so realistic that it can be hard to tell from a distance whether or not they are actually human. Although they have a realistic look, they have no cognition or physical autonomy. Various humanoid robots and their possible applications in daily life are featured in the independent documentary film Plug & Pray, released in 2010.
The document discusses human activity recognition from video data using computer vision techniques. It describes recognizing activities at different levels from object locations to full activities. Basic activities like walking and clapping are the focus. Key steps involve tracking segmented objects across frames and comparing motion patterns to templates to identify activities through model fitting. The DEV8000 development kit and Linux are used to process video and recognize activities in real-time. Applications discussed include surveillance, sports analysis, and unmanned vehicles.
This robot uses IR sensors to detect edges and avoid falling off platforms. It has two IR sensors, one on each side, that detect light reflecting off the platform. When an edge is detected, the comparator sends a signal to the microcontroller, which makes the robot turn away from the edge by only powering the motor on the opposite side. This allows the robot to continuously move around the platform without falling.
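The decision logic described above reduces to a small truth table over the two sensors. The sketch below follows the text's rule of powering only the motor on the side opposite the detected edge; the function and the fail-safe for the both-sensors-off case are assumptions for illustration, not taken from the actual firmware:

```python
# Hypothetical sketch of the edge-avoider's control decision. True means the
# IR sensor on that side still sees light reflected off the platform.

def motor_command(left_on_platform, right_on_platform):
    """Return (left_motor, right_motor) power, 1 = on, 0 = off."""
    if left_on_platform and right_on_platform:
        return (1, 1)   # platform under both sensors: drive straight
    if not left_on_platform and right_on_platform:
        return (0, 1)   # edge on the left: power only the opposite (right) motor
    if left_on_platform and not right_on_platform:
        return (1, 0)   # edge on the right: power only the opposite (left) motor
    return (0, 0)       # edge under both sensors: stop (an assumed fail-safe)

print(motor_command(True, False))  # (1, 0): turn away from the right edge
```

In the real robot this function would run continuously in the microcontroller's main loop, reading the comparator outputs each pass.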
The document discusses various path planning techniques for mobile robots to navigate between a starting point and destination while avoiding collisions. It describes methods like visibility graphs, roadmaps, cell decomposition, and potential fields. It also covers implementing techniques like breadth-first search on visibility graphs and optimizing robot trajectories using factors like travel time, distance and sensor information.
This document summarizes the n-queen problem, which involves placing N queens on an N x N chessboard so that no queen can attack any other. It describes the problem's inputs and tasks, provides examples of solutions for different board sizes, and outlines the backtracking algorithm commonly used to solve this problem. The backtracking approach guarantees a solution but can be slow, with complexity rising exponentially with problem size. It is a good benchmark for testing parallel computing systems due to its iterative nature.
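The backtracking algorithm outlined above can be sketched as follows. This is a generic solver, not code from the original document:

```python
# Standard backtracking solver for the n-queen problem: place queens row by
# row, and undo (backtrack) any placement that leads to a dead end.

def solve_n_queens(n):
    """Return one solution as a list of column indices, row by row, or None."""
    cols = []                        # cols[r] = column of the queen in row r

    def safe(row, col):
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False         # same column or same diagonal
        return True

    def place(row):
        if row == n:
            return True              # all n queens placed
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()           # backtrack: undo the placement
        return False

    return cols if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
```

The exponential growth mentioned above shows up directly here: each additional row multiplies the number of candidate placements the recursion may have to explore.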
Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language.
Problem Characteristics in Artificial Intelligence
Unit 2: Problem Solving and Searching Techniques
To choose an appropriate method for a particular problem, we first need to categorize the problem based on the following characteristics.
Is the problem decomposable into small sub-problems which are easy to solve?
Can solution steps be ignored or undone?
Is the universe of the problem predictable?
Is a good solution to the problem absolute or relative?
Is the solution to the problem a state or a path?
What is the role of knowledge in solving a problem using artificial intelligence?
Does the task of solving a problem require human interaction?
1. Is the problem decomposable into small sub-problems which are easy to solve?
Can the problem be broken down into smaller problems to be solved independently?
A decomposable problem can be solved more easily.
Example: the problem is divided into smaller subproblems, the subproblems are solved independently, and their results are merged to obtain the final result.
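The divide, solve independently, and merge pattern can be sketched with a toy problem; summing a list is an invented example used purely to illustrate decomposability:

```python
# Toy illustration of a decomposable problem: summing a list splits into two
# independent subproblems whose results are merged by addition.

def rec_sum(xs):
    if len(xs) <= 1:
        return xs[0] if xs else 0    # trivial subproblem: solve directly
    mid = len(xs) // 2
    left = rec_sum(xs[:mid])         # solve each half independently
    right = rec_sum(xs[mid:])
    return left + right              # merge the partial results

print(rec_sum([3, 1, 4, 1, 5, 9]))  # 23
```

The key property is that the two halves share no state: each subproblem can be solved (even on a different machine) without consulting the other.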
2. Can solution steps be ignored or undone?
In theorem proving, a lemma that has been proved can simply be ignored in later steps.
Such problems are called ignorable problems.
In the 8-puzzle, moves can be undone and backtracked.
Such problems are called recoverable problems.
In playing chess, moves cannot be retracted.
Such problems are called irrecoverable problems.
Ignorable problems can be solved using a simple control structure that never backtracks. Recoverable problems can be solved by backtracking. Irrecoverable problems must be solved with recoverable-style methods via planning, because a bad move cannot be undone.
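The backtracking control structure for recoverable problems can be sketched as a depth-first search that undoes any step leading to a dead end. The state graph below is invented for illustration:

```python
# Depth-first search with explicit undo: each step is appended to the path,
# and popped (undone) if it cannot be extended to the goal. This is the
# backtracking control structure used for recoverable problems.

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["goal"], "goal": []}

def dfs(state, goal, path):
    path.append(state)               # take a step
    if state == goal:
        return True
    for nxt in GRAPH[state]:
        if nxt not in path and dfs(nxt, goal, path):
            return True
    path.pop()                       # dead end: undo the step and backtrack
    return False

path = []
dfs("A", "goal", path)
print(path)  # ['A', 'B', 'D', 'goal']
```

An ignorable problem would never need the `path.pop()` line, while an irrecoverable one could not afford to take a step speculatively in the first place.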
3. Is the universe of the problem predictable?
In playing bridge, we cannot know exactly where all the cards are or what the other players will do on their turns.
Uncertain outcome!
For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution.
For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. Plan revision is made as the plan is carried out and the necessary feedback is provided.
4. Is a good solution to the problem absolute or relative?
In the Travelling Salesman Problem, we have to try all paths to find the shortest one.
Any-path problems can often be solved in reasonable time using heuristics that suggest good paths to explore.
For best-path problems, a much more exhaustive search must be performed.
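For the Travelling Salesman Problem, a classic heuristic of this kind is nearest-neighbour: always visit the closest unvisited city. It usually finds a good tour, but not necessarily the shortest; the city distances below are invented for illustration:

```python
# Nearest-neighbour heuristic for the TSP: a greedy "any-path" strategy that
# gives a good tour quickly, with no guarantee of optimality.

DIST = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8,
}

def d(a, b):
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def nearest_neighbour_tour(cities, start):
    tour, rest = [start], set(cities) - {start}
    while rest:
        nxt = min(rest, key=lambda c: d(tour[-1], c))  # greedily pick closest
        tour.append(nxt)
        rest.remove(nxt)
    return tour

print(nearest_neighbour_tour({"A", "B", "C", "D"}, "A"))  # ['A', 'B', 'D', 'C']
```

Finding the truly shortest tour (the best-path version) would instead require examining all permutations of the cities, which is what makes the problem exponentially hard.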
5. Is the solution to the problem a state or a path?
In the Water Jug Problem, the path that leads to the goal must be reported, not just the goal state itself.
The document discusses various kinematic problems in robotics including forward kinematics, inverse kinematics, and Denavit-Hartenberg notation. Forward kinematics determines the pose of the end-effector given the joint angles, while inverse kinematics determines the required joint angles to achieve a desired end-effector pose. Denavit-Hartenberg notation provides a systematic way to describe the geometry of robot links and joints. The document also covers numerical solutions to inverse kinematics for robots without closed-form solutions and kinematics of special robot types like under-actuated and redundant manipulators.
From Image Processing to Computer Vision, by Joud Khattab
This document provides an overview of digital image processing and computer vision. It defines digital images and describes different image types including binary, grayscale, and color images. The document outlines common digital image processing steps such as acquisition, enhancement, restoration, compression, segmentation, representation and description. It also discusses applications of computer vision such as scene completion, object detection and recognition tasks. In summary, the document serves as an introduction to digital image processing and computer vision concepts.
This document provides an overview of humanoid robots and ASIMO, an advanced humanoid robot created by Honda. It defines a humanoid robot as one that resembles humans in appearance and behavior, with a head, legs, arms and hands. Humanoid robots are developed to work in human environments without needing adaptation. ASIMO is introduced as a 120cm tall, 43kg human-like robot that can walk, climb stairs, make decisions, and use common sense. The latest version of ASIMO can understand human gestures and movements by following people and recognizing faces. It is considered intelligent because it can understand, learn, solve problems, make its own decisions, and adapt to new environments. Potential social issues around human
This document describes a project that uses SLAM (Simultaneous Localization and Mapping) to create maps of an unknown environment using a Turtlebot mobile robot equipped with a Kinect sensor. The GMapping ROS package is used to implement SLAM and build occupancy grid maps while simultaneously localizing the robot. Issues arose from using the Kinect's RGB sensor instead of a laser scanner for mapping. The resulting maps showed the robot's trajectory and obstacles but were less precise than if a laser scanner was used. Autonomous navigation on the created maps demonstrated the basic SLAM approach.
Sergii Kharagorgiiev, Chief Computer Vision Engineer at Starship Technologies (Codiax)
Starship is developing self-driving delivery robots to address the growing demand for last mile deliveries of packages, food, and other goods. The company's robots use computer vision and machine learning algorithms running on multiple cameras and other sensors to navigate autonomously on sidewalks and avoid obstacles. Starship has driven over 100,000 km in pilots and operates commercial services in several cities. The company prioritizes safety and has achieved over 100,000 km of autonomous driving without any third-party property damage or injuries.
Reinforcement learning is a machine learning technique where an agent learns how to behave in an environment by receiving rewards or punishments for its actions. The goal of the agent is to learn an optimal policy that maximizes long-term rewards. Reinforcement learning can be applied to problems like game playing, robot control, scheduling, and economic modeling. The reinforcement learning process involves an agent interacting with an environment to learn through trial-and-error using state, action, reward, and policy. Common algorithms include Q-learning which uses a Q-table to learn the optimal action-selection policy.
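The Q-table update the summary mentions can be sketched on a toy problem. The environment (a five-cell corridor with a reward at the right end) and all parameter values below are invented for illustration, not taken from the summarized document:

```python
# Minimal Q-learning sketch: states 0..4, actions move left (-1) or right (+1),
# reward 1.0 for reaching the rightmost cell. Q-learning is off-policy, so
# even purely random exploration lets the Q-table converge.
import random

N_STATES, ACTIONS = 5, [-1, +1]
alpha, gamma = 0.5, 0.9
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)     # walls at both ends
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                          # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)            # explore at random
        s2, r = step(s, a)
        # core update: move Q(s,a) toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy extracted from the learned Q-table:
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy should move right in every non-terminal state
```

The single update line is the whole algorithm: everything else is just the agent interacting with its environment through state, action, and reward, as described above.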
The document discusses artificial intelligence (AI) and provides definitions, techniques, branches, and applications of AI. It defines AI as creating intelligent machines, especially computer programs, that can think like humans. It discusses representing knowledge to solve problems as an AI technique. Some branches of AI mentioned are logical AI, search, pattern recognition, representation, inference, common sense reasoning, learning from experience, planning, and applications in fields like robotics, natural language processing, and game playing.
Computer vision is a field of artificial intelligence that uses algorithms to allow computers to identify and process objects in images and videos similarly to how humans do. The goal is for computers to have human-like visual perception abilities or even surpass humans in certain ways. Computer vision works by training models on large labeled image datasets to detect patterns related to the labels. It has applications in self-driving cars, facial recognition, augmented reality, healthcare, and object detection. There are many job opportunities in computer vision engineering, research, development and science with salaries typically ranging from $68,000 to $136,000.
The advent of mobile robotics changed the definition of robotics and brought in some very interesting technologies, paving the way for cutting-edge fields such as AI and behaviour-based systems.
The document discusses state space search problems and techniques for solving them. It defines state space search as a process of considering successive configurations or states of a problem instance to find a goal state. Various search techniques like breadth-first search, depth-first search, and heuristic search are described. It also discusses problem characteristics that help determine the most appropriate search method, such as whether a problem can be decomposed or solution steps ignored/undone. Examples of search problems like the 8-puzzle, chess, and water jug problems are provided to illustrate state space formulation and solutions.
This document presents a C program to shear a cuboid. It includes graphics header files and uses Bresenham's line algorithm to draw lines. The program defines a function called 'bress' that takes the coordinates of two points as input and uses conditions on the slope to determine the increment, endpoint, and direction of line drawing. This function is used to draw the individual lines of the cuboid before and after shearing.
This document provides instructions and outlines the syllabus for the laboratory course CS8383 – Object Oriented Programming Laboratory. It includes 12 experiments covering topics like generating bills, currency conversion, inheritance with employee classes, interfaces, string operations with ArrayList, abstract classes, exception handling, file processing, multithreading, generics, and event-driven programming for a calculator application. The course aims to help students develop and implement Java programs involving classes, packages, interfaces, and concepts like exception handling, file processing, and event handling.
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents".
Robotics is the interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots,[1] as well as computer systems for their control, sensory feedback, and information processing.
This document describes how to program simple AI for a Tic-Tac-Toe game by building a game tree to represent all possible game configurations and selecting the move that yields the best outcome. It explains that the game tree will have MAX and MIN nodes to represent whose turn it is, and children nodes for each possible move. The algorithm searches the tree to a depth of 2, then the root node selects the move with the highest evaluation value to determine the computer's best move.
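A generic minimax pass over such a MAX/MIN tree can be sketched as follows. The tree shape and its leaf evaluation values are invented; a real Tic-Tac-Toe program would generate them from board positions:

```python
# Minimax over an explicit game tree: leaves are static evaluation values,
# inner lists are nodes whose children belong to the other player. MAX nodes
# take the maximum of their children, MIN nodes the minimum.

def minimax(node, maximizing):
    if isinstance(node, int):            # leaf: return its evaluation value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is a MAX node with three possible moves; each child is the set of
# MIN (opponent) replies, searched to depth 2 as in the description above.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
scores = [minimax(child, maximizing=False) for child in tree]
best_move = scores.index(max(scores))
print(scores, best_move)  # [3, 2, 2] 0 -- MAX selects the first move
```

The root picks move 0 because the opponent can force the outcome down to 3 there, versus only 2 for the other moves; this "best of the worst cases" reasoning is exactly what the game tree encodes.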
This document discusses several search strategies including uninformed search, breadth-first search, depth-first search, uniform cost search, iterative deepening search, and bi-directional search. It provides algorithms and examples to explain how each strategy works. Key points include: breadth-first search visits nodes by level of depth; depth-first search generates nodes along the largest depth first before moving up; uniform cost search expands the lowest cost node; and iterative deepening search avoids infinite depth by searching each level iteratively and increasing the depth limit.
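Breadth-first search, the first of these strategies, can be sketched with a FIFO queue; the graph below is invented for illustration:

```python
# Breadth-first search: expand nodes level by level using a FIFO queue, so
# shallower nodes are always visited before deeper ones.
from collections import deque

def bfs(graph, start):
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()           # FIFO: shallowest unexpanded node
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```

Swapping the `deque` for a LIFO stack turns this into depth-first search, and replacing it with a priority queue keyed on path cost yields uniform cost search, which is why these strategies are usually presented together.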
The document provides an overview of computer vision including:
- It defines computer vision as using observed image data to infer something about the world.
- It briefly discusses the history of computer vision from early projects in 1966 to David Marr establishing the foundations of modern computer vision in the 1970s.
- It lists several related fields that computer vision draws from including artificial intelligence, information engineering, neurobiology, solid-state physics, and signal processing.
- It provides examples of applications of computer vision such as self-driving vehicles, facial recognition, augmented reality, and uses in smartphones, the web, VR/AR, medical imaging, and insurance.
This document summarizes various search algorithms and toy problems that are used to illustrate problem solving by searching. It begins by introducing problem solving as finding a sequence of actions to achieve a goal from an initial state. It then discusses uninformed search strategies like breadth-first, depth-first, and uniform cost search. Several toy problems are presented, including the 8-puzzle, vacuum world, missionaries and cannibals problem. Real-world problems involving route finding, VLSI layout, and robot navigation are also briefly described. Evaluation criteria for search algorithms like space/time complexity and optimality/completeness are covered. Finally, iterative deepening search is introduced as a way to overcome depth limitations.
This document discusses greedy algorithms and divide and conquer algorithms. It provides examples of problems that can be solved using each approach and outlines the general solutions. For greedy algorithms, it explains that they make locally optimal choices at each step without considering future possibilities. For divide and conquer, it explains the three main steps of dividing the problem into subproblems, solving the subproblems recursively, and merging the solutions. It also provides an example problem of finding the number of inversions in an array by modifying the merge sort algorithm.
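The inversion-counting example mentioned above is a classic divide-and-conquer exercise; the following is a generic sketch of the modified merge sort, not code from the original slides:

```python
# Count inversions (pairs i < j with xs[i] > xs[j]) by piggybacking on merge
# sort: whenever an element from the right half is merged before the rest of
# the left half, it forms an inversion with every remaining left element.

def count_inversions(xs):
    """Return (sorted list, inversion count)."""
    if len(xs) <= 1:
        return xs, 0
    mid = len(xs) // 2
    left, a = count_inversions(xs[:mid])     # divide and solve recursively
    right, b = count_inversions(xs[mid:])
    merged, i, j, cross = [], 0, 0, 0
    while i < len(left) and j < len(right):  # merge step counts cross pairs
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            cross += len(left) - i           # all remaining left elems invert
    merged += left[i:] + right[j:]
    return merged, a + b + cross

print(count_inversions([2, 4, 1, 3, 5])[1])  # 3
```

The total is the inversions within each half plus the cross-half pairs counted during the merge, giving O(n log n) instead of the naive O(n^2) pair check.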
The advent of Mobile Robotics changed the definition of robotics and brought in some very interesting technologies paving the way for cutting edge sciences like AI, Behaviour Based Systems, etc
The document discusses state space search problems and techniques for solving them. It defines state space search as a process of considering successive configurations or states of a problem instance to find a goal state. Various search techniques like breadth-first search, depth-first search, and heuristic search are described. It also discusses problem characteristics that help determine the most appropriate search method, such as whether a problem can be decomposed or solution steps ignored/undone. Examples of search problems like the 8-puzzle, chess, and water jug problems are provided to illustrate state space formulation and solutions.
This program writes a C code to shear a cuboid. It includes graphics header files and uses the Bresenham's line algorithm to draw lines. The program defines a function called 'bress' to draw lines using the Bresenham algorithm. It takes coordinates of two points as input and uses conditions on the slope to determine the increment, endpoint and direction of line drawing. This function is used to draw the individual lines of the cuboid before and after shearing.
This document provides instructions and outlines the syllabus for the laboratory course CS8383 – Object Oriented Programming Laboratory. It includes 12 experiments covering topics like generating bills, currency conversion, inheritance with employee classes, interfaces, string operations with ArrayList, abstract classes, exception handling, file processing, multithreading, generics, and event-driven programming for a calculator application. The course aims to help students develop and implement Java programs involving classes, packages, interfaces, and concepts like exception handling, file processing, and event handling.
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents".
Robotics is the interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots,[1] as well as computer systems for their control, sensory feedback, and information processing.
This document describes how to program simple AI for a Tic-Tac-Toe game by building a game tree to represent all possible game configurations and selecting the move that yields the best outcome. It explains that the game tree will have MAX and MIN nodes to represent whose turn it is, and children nodes for each possible move. The algorithm searches the tree to a depth of 2, then the root node selects the move with the highest evaluation value to determine the computer's best move.
This document discusses several search strategies including uninformed search, breadth-first search, depth-first search, uniform cost search, iterative deepening search, and bi-directional search. It provides algorithms and examples to explain how each strategy works. Key points include: breadth-first search visits nodes by level of depth; depth-first search generates nodes along the largest depth first before moving up; uniform cost search expands the lowest cost node; and iterative deepening search avoids infinite depth by searching each level iteratively and increasing the depth limit.
The document provides an overview of computer vision including:
- It defines computer vision as using observed image data to infer something about the world.
- It briefly discusses the history of computer vision from early projects in 1966 to David Marr establishing the foundations of modern computer vision in the 1970s.
- It lists several related fields that computer vision draws from including artificial intelligence, information engineering, neurobiology, solid-state physics, and signal processing.
- It provides examples of applications of computer vision such as self-driving vehicles, facial recognition, augmented reality, and uses in smartphones, the web, VR/AR, medical imaging, and insurance.
This document summarizes various search algorithms and toy problems that are used to illustrate problem solving by searching. It begins by introducing problem solving as finding a sequence of actions to achieve a goal from an initial state. It then discusses uninformed search strategies like breadth-first, depth-first, and uniform cost search. Several toy problems are presented, including the 8-puzzle, vacuum world, missionaries and cannibals problem. Real-world problems involving route finding, VLSI layout, and robot navigation are also briefly described. Evaluation criteria for search algorithms like space/time complexity and optimality/completeness are covered. Finally, iterative deepening search is introduced as a way to overcome depth limitations.
This document discusses greedy algorithms and divide and conquer algorithms. It provides examples of problems that can be solved using each approach and outlines the general solutions. For greedy algorithms, it explains that they make locally optimal choices at each step without considering future possibilities. For divide and conquer, it explains the three main steps of dividing the problem into subproblems, solving the subproblems recursively, and merging the solutions. It also provides an example problem of finding the number of inversions in an array by modifying the merge sort algorithm.
This document discusses state-space representations for general problem solving. It provides examples of state-space models for various problems including the 8-queens puzzle, traveling salesman problem, sliding tile puzzle, cryptarithmetic, Boolean satisfiability, crossword puzzles, and finding common misspelled words in strings. The key components of a state-space model are the initial state, operators that change the state, and a goal test to determine when the problem is solved.
This document discusses search algorithms and problem solving through searching. It begins by defining search problems and representing them using graphs with states as nodes and actions as edges. It then covers uninformed search strategies like breadth-first and depth-first search. Informed search strategies use heuristics to guide the search toward more promising areas of the problem space. Examples of single agent pathfinding problems are given like the traveling salesman problem and Rubik's cube.
The document discusses uninformed search techniques. It provides examples of representing problems as states and operators that transform states. This includes problems like the water jug problem, 8-puzzle, and 8-queens. It then describes common uninformed search algorithms like breadth-first search, depth-first search, iterative deepening, and uniform cost search. It analyzes the properties of these algorithms like completeness, time complexity, space complexity, and optimality.
The document discusses the characteristics of algorithms and the concept of mathematical expectation in average case analysis. It then provides the pseudocode for the MaxMin algorithm and discusses the greedy knapsack algorithm and the travelling salesman problem. Finally, it explains the sum of subsets problem, describing two formulations and how the solution space can be organized into trees.
Lecture is related to the topic of Artificial intelligencemohsinwaseer1
The document discusses different types of problem-solving agents, including reflex agents which directly map states to actions, and goal-based agents which solve problems by searching for sequences of actions that lead to desirable goal states. It provides examples of well-defined problems like the vacuum world and 8-puzzle that involve specifying an initial state, possible actions, transition models, a goal test, and a path cost function. The document also discusses how real-world problems like route planning and airline travel can be modeled as search problems by defining states, actions, transitions between states, and optimal solutions.
The document provides information about topics to be covered in class today including warm-ups, make-up tests, reviews of radical equations, Pythagorean theorem, distance and midpoint formulas, and STAR testing. It then reviews solving radical equations, using the Pythagorean theorem, and applying the distance and midpoint formulas to different types of problems. Examples are provided for using the distance and midpoint formulas to find missing coordinates given certain known values.
Constraint satisfaction problems (CSPs) involve assigning values to variables from given domains so that all constraints are satisfied. CSPs provide a general framework that can model many combinatorial problems. A CSP is defined by variables that take values from domains, and constraints specifying allowed value combinations. Real-world CSPs include scheduling, assignment problems, timetabling, mapping coloring and puzzles. Examples provided include cryptarithmetic, Sudoku, 4-queens, and graph coloring.
This document describes sets and operations on sets related to numbers on a roulette wheel. It defines six sets - A (red numbers), B (black numbers), C (green numbers), D (even numbers), E (odd numbers), and F (numbers 1-12). It provides the elements of each set based on a standard American roulette wheel. It then calculates the unions and intersections of these sets according to the given operations. Tables and diagrams are provided to represent the set operations and relationships.
This document discusses algorithms for clipping circles and curves to a bounding region. It describes a fast circle clipping algorithm that uses an accept/reject test to determine whether points are inside or outside the clipping region. It also discusses a midpoint circle algorithm that uses incremental steps to scan convert circles. Finally, it explains that curved objects can be clipped by first testing if their bounding rectangles overlap the clipping region before solving nonlinear equations to find curve-window intersection points.
The document discusses various search algorithms used in artificial intelligence problem solving. It defines key search terminology like problem space, states, actions, and goals. It then explains different types of search problems and provides examples like the 8-puzzle and vacuuming world problems. Finally, it summarizes uninformed search strategies like breadth-first search, depth-first search, and iterative deepening search as well as informed strategies like greedy best-first search and A* search which use heuristics to guide the search.
Reinforcement learning is a computational approach for learning through interaction without an explicit teacher. An agent takes actions in various states and receives rewards, allowing it to learn relationships between situations and optimal actions. The goal is to learn a policy that maximizes long-term rewards by balancing exploitation of current knowledge with exploration of new actions. Methods like Q-learning use value function approximation and experience replay in deep neural networks to scale to complex problems with large state spaces like video games. Temporal difference learning combines the advantages of Monte Carlo and dynamic programming by bootstrapping values from current estimates rather than waiting for full episodes.
The document discusses various backtracking algorithms and problems. It begins with an overview of backtracking as a general algorithm design technique for problems that involve traversing decision trees and exploring partial solutions. It then provides examples of specific problems that can be solved using backtracking, including the N-Queens problem, map coloring problem, and Hamiltonian circuits problem. It also discusses common terminology and concepts in backtracking algorithms like state space trees, pruning nonpromising nodes, and backtracking when partial solutions are determined to not lead to complete solutions.
Unit-1 Basic Concept of Algorithm.pptxssuser01e301
The document discusses various topics related to algorithms including algorithm design, real-life applications, analysis, and implementation. It specifically covers four algorithms - the taxi algorithm, rent-a-car algorithm, call-me algorithm, and bus algorithm - for getting from an airport to a house. It also provides examples of simple multiplication methods like the American, English, and Russian approaches as well as the divide and conquer method.
This presentation is the full application of discrete mathematics throughout a course and includes Set Theory, Functions nd Sequences, Automata Theory, Grammars and algorithm building.
This document discusses dynamic programming and algorithms for solving all-pair shortest path problems. It begins by defining dynamic programming as avoiding recalculating solutions by storing results in a table. It then describes Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, calculating shortest paths that pass through each intermediate node. It takes O(n3) time for a graph with n nodes. Finally, it discusses the multistage graph problem and provides forward and backward algorithms to find the minimum cost path from source to destination in a multistage graph in O(V+E) time, where V and E are the numbers of vertices and edges.
This document provides instructions for calculating and interpreting Spearman's rank correlation coefficient. It begins with an example comparing pedestrian counts and convenience shops in 12 town zones. Tables are constructed to rank the data and calculate differences between ranks. The equation for Spearman's rank is shown and applied to the example data, yielding a value of 0.888. This indicates a fairly positive relationship between pedestrian counts and shops. Critical values tables are presented to determine statistical significance based on the sample size. In this case, the value exceeds thresholds for 95% and 99% confidence, showing a highly significant relationship.
The document discusses problem solving through search. It defines intelligent agents, search problems, and search graphs. Search problems are formulated using states, operators, start states, and goal states. Several search algorithms are introduced, including depth-first search and breadth-first search. Examples of search problems discussed include finding a route from Arad to Bucharest in Romania, the vacuum world problem, the 8-queens problem, and the 8-puzzle problem. The document outlines how to represent these problems as state spaces and formulates them in terms of states, actions, initial states, and goal tests. It also introduces tree search algorithms and strategies for searching state spaces, such as uninformed blind search and informed heuristic search.
The document provides an introduction to unsupervised learning and reinforcement learning. It then discusses eigen values and eigen vectors, showing how to calculate them from a matrix. It provides examples of covariance matrices and using Gaussian elimination to solve for eigen vectors. Finally, it discusses principal component analysis and different clustering algorithms like K-means clustering.
Cross validation is a technique for evaluating machine learning models by splitting the dataset into training and validation sets and training the model multiple times on different splits, to reduce variance. K-fold cross validation splits the data into k equally sized folds, where each fold is used once for validation while the remaining k-1 folds are used for training. Leave-one-out cross validation uses a single observation from the dataset as the validation set. Stratified k-fold cross validation ensures each fold has the same class proportions as the full dataset. Grid search evaluates all combinations of hyperparameters specified as a grid, while randomized search samples hyperparameters randomly within specified ranges. Learning curves show training and validation performance as a function of training set size and can diagnose underfitting
This document provides an overview of supervised machine learning algorithms for classification, including logistic regression, k-nearest neighbors (KNN), support vector machines (SVM), and decision trees. It discusses key concepts like evaluation metrics, performance measures, and use cases. For logistic regression, it covers the mathematics behind maximum likelihood estimation and gradient descent. For KNN, it explains the algorithm and discusses distance metrics and a numerical example. For SVM, it outlines the concept of finding the optimal hyperplane that maximizes the margin between classes.
The document provides information on solving the sum of subsets problem using backtracking. It discusses two formulations - one where solutions are represented by tuples indicating which numbers are included, and another where each position indicates if the corresponding number is included or not. It shows the state space tree that represents all possible solutions for each formulation. The tree is traversed depth-first to find all solutions where the sum of the included numbers equals the target sum. Pruning techniques are used to avoid exploring non-promising paths.
The document discusses the greedy method and its applications. It begins by defining the greedy approach for optimization problems, noting that greedy algorithms make locally optimal choices at each step in hopes of finding a global optimum. Some applications of the greedy method include the knapsack problem, minimum spanning trees using Kruskal's and Prim's algorithms, job sequencing with deadlines, and finding the shortest path using Dijkstra's algorithm. The document then focuses on explaining the fractional knapsack problem and providing a step-by-step example of solving it using a greedy approach. It also provides examples and explanations of Kruskal's algorithm for finding minimum spanning trees.
The document describes various divide and conquer algorithms including binary search, merge sort, quicksort, and finding maximum and minimum elements. It begins by explaining the general divide and conquer approach of dividing a problem into smaller subproblems, solving the subproblems independently, and combining the solutions. Several examples are then provided with pseudocode and analysis of their divide and conquer implementations. Key algorithms covered in the document include binary search (log n time), merge sort (n log n time), and quicksort (n log n time on average).
What is an Algorithm
Time Complexity
Space Complexity
Asymptotic Notations
Recursive Analysis
Selection Sort
Insertion Sort
Recurrences
Substitution Method
Master Tree Method
Recursion Tree Method
This document provides an outline for a machine learning syllabus. It includes 14 modules covering topics like machine learning terminology, supervised and unsupervised learning algorithms, optimization techniques, and projects. It lists software and hardware requirements for the course. It also discusses machine learning applications, issues, and the steps to build a machine learning model.
The simplex method is a linear programming algorithm that can solve problems with more than two decision variables. It works by generating a series of solutions, called tableaus, where each tableau corresponds to a corner point of the feasible solution space. The algorithm starts at the initial tableau, which corresponds to the origin. It then shifts to adjacent corner points, moving in the direction that optimizes the objective function. This process of generating new tableaus continues until an optimal solution is found.
The document discusses functions and the pigeonhole principle. It defines what a function is, how functions can be represented graphically and with tables and ordered pairs. It covers one-to-one, onto, and bijective functions. It also discusses function composition, inverse functions, and the identity function. The pigeonhole principle states that if n objects are put into m containers where n > m, then at least one container must hold more than one object. Examples are given to illustrate how to apply the principle to problems involving months, socks, and selecting numbers.
The document discusses relations and their representations. It defines a binary relation as a subset of A×B where A and B are nonempty sets. Relations can be represented using arrow diagrams, directed graphs, and zero-one matrices. A directed graph represents the elements of A as vertices and draws an edge from vertex a to b if aRb. The zero-one matrix representation assigns 1 to the entry in row a and column b if (a,b) is in the relation, and 0 otherwise. The document also discusses indegrees, outdegrees, composite relations, and properties of relations like reflexivity.
This document discusses logic and propositional logic. It covers the following topics:
- The history and applications of logic.
- Different types of statements and their grammar.
- Propositional logic including symbols, connectives, truth tables, and semantics.
- Quantifiers, universal and existential quantification, and properties of quantifiers.
- Normal forms such as disjunctive normal form and conjunctive normal form.
- Inference rules and the principle of mathematical induction, illustrated with examples.
1. Set theory is an important mathematical concept and tool that is used in many areas including programming, real-world applications, and computer science problems.
2. The document introduces some basic concepts of set theory including sets, members, operations on sets like union and intersection, and relationships between sets like subsets and complements.
3. Infinite sets are discussed as well as different types of infinite sets including countably infinite and uncountably infinite sets. Special sets like the empty set and power sets are also covered.
The document discusses uncertainty and probabilistic reasoning. It describes sources of uncertainty like partial information, unreliable information, and conflicting information from multiple sources. It then discusses representing and reasoning with uncertainty using techniques like default logic, rules with probabilities, and probability theory. The key approaches covered are conditional probability, independence, conditional independence, and using Bayes' rule to update probabilities based on new evidence.
The document outlines the objectives, outcomes, and learning outcomes of a course on artificial intelligence. The objectives include conceptualizing ideas and techniques for intelligent systems, understanding mechanisms of intelligent thought and action, and understanding advanced representation and search techniques. Outcomes include developing an understanding of AI building blocks, choosing appropriate problem solving methods, analyzing strengths and weaknesses of AI approaches, and designing models for reasoning with uncertainty. Learning outcomes include knowledge, intellectual skills, practical skills, and transferable skills in artificial intelligence.
Planning involves representing an initial state, possible actions, and a goal state. A planning agent uses a knowledge base to select action sequences that transform the initial state into a goal state. STRIPS is a common planning representation that uses predicates to describe states and logical operators to represent actions and their effects. A STRIPS planning problem specifies the initial state, goal conditions, and set of operators. A solution is a sequence of ground operator instances that produces the goal state from the initial state.
Better Builder Magazine brings together premium product manufactures and leading builders to create better differentiated homes and buildings that use less energy, save water and reduce our impact on the environment. The magazine is published four times a year.
This is an overview of my current metallic design and engineering knowledge base built up over my professional career and two MSc degrees : - MSc in Advanced Manufacturing Technology University of Portsmouth graduated 1st May 1998, and MSc in Aircraft Engineering Cranfield University graduated 8th June 2007.
An In-Depth Exploration of Natural Language Processing: Evolution, Applicatio...DharmaBanothu
Natural language processing (NLP) has
recently garnered significant interest for the
computational representation and analysis of human
language. Its applications span multiple domains such
as machine translation, email spam detection,
information extraction, summarization, healthcare,
and question answering. This paper first delineates
four phases by examining various levels of NLP and
components of Natural Language Generation,
followed by a review of the history and progression of
NLP. Subsequently, we delve into the current state of
the art by presenting diverse NLP applications,
contemporary trends, and challenges. Finally, we
discuss some available datasets, models, and
evaluation metrics in NLP.
Sri Guru Hargobind Ji - Bandi Chor Guru.pdfBalvir Singh
Sri Guru Hargobind Ji (19 June 1595 - 3 March 1644) is revered as the Sixth Nanak.
• On 25 May 1606 Guru Arjan nominated his son Sri Hargobind Ji as his successor. Shortly
afterwards, Guru Arjan was arrested, tortured and killed by order of the Mogul Emperor
Jahangir.
• Guru Hargobind's succession ceremony took place on 24 June 1606. He was barely
eleven years old when he became 6th Guru.
• As ordered by Guru Arjan Dev Ji, he put on two swords, one indicated his spiritual
authority (PIRI) and the other, his temporal authority (MIRI). He thus for the first time
initiated military tradition in the Sikh faith to resist religious persecution, protect
people’s freedom and independence to practice religion by choice. He transformed
Sikhs to be Saints and Soldier.
• He had a long tenure as Guru, lasting 37 years, 9 months and 3 days
We have designed & manufacture the Lubi Valves LBF series type of Butterfly Valves for General Utility Water applications as well as for HVAC applications.
3. Shiwani Gupta
Agents
• An agent is an entity that can be viewed as perceiving its environment
through sensors and acting upon that environment through
effectors/actuators to achieve goals
• Ideal, rational, human, robotic, software, autonomous
• Simple reflex, Model Based, Goal based, Utility based
• The Simple Reflex Agent will work only if the correct decision can
be made on the basis of the current percept.
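The condition-action mapping of a simple reflex agent can be sketched in a few lines. The vacuum-world percept format and rules below are illustrative assumptions, not taken from the slides:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: map the current percept directly to an action.
    Percept format ("location", "status") is an illustrative assumption."""
    location, status = percept
    if status == "Dirty":      # rule 1: clean a dirty square
        return "Suck"
    elif location == "A":      # rule 2: otherwise move to the other square
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```

Because the action depends only on the current percept, such an agent cannot plan ahead.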
4.
Drawback of Simple reflex agents (Module2)
• unable to plan ahead
• limited in what they can do because their actions are determined only
by the current percept
• have no knowledge of what their actions do nor of what they are
trying to achieve.
Solution:
one kind of goal-based agent… problem-solving agent.
(Problem-solving agents decide what to do by finding sequences of
actions that lead to desirable states)
• Goal formulation based on current situation
• Problem Formulation is the process of deciding what actions and
states to consider
• Search is the process of looking for the different possible sequences of
actions that lead to the goal state
• Execution is carrying out the actions recommended as a solution by the
search algorithm
A goal and a set of means for achieving the goal is called a problem, and
the process of exploring what the means can do is called search
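The formulate-search-execute cycle described above can be sketched as a minimal problem interface; all class and method names here are illustrative assumptions, not a prescribed framework:

```python
class Problem:
    """A search problem: initial state, actions, transition model,
    goal test and step cost (a sketch, not a full framework)."""
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal
    def actions(self, state):                 # applicable actions in `state`
        raise NotImplementedError
    def result(self, state, action):          # transition model
        raise NotImplementedError
    def goal_test(self, state):
        return state == self.goal
    def step_cost(self, state, action, next_state):
        return 1                              # default: unit cost per action

class CountTo(Problem):
    """Tiny concrete instance: increment an integer until it reaches the goal."""
    def actions(self, state):
        return ["+1"]
    def result(self, state, action):
        return state + 1

def execute(problem):
    """Degenerate 'search': repeatedly apply the single available action."""
    state, plan = problem.initial, []
    while not problem.goal_test(state):
        action = problem.actions(state)[0]
        plan.append(action)
        state = problem.result(state, action)
    return plan

print(execute(CountTo(0, 3)))  # -> ['+1', '+1', '+1']
```

Any of the toy problems that follow can be cast in this shape by filling in states, actions, the transition model, and the goal test.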
8.
Toy Problems vs. Real-World Problems
Toy Problems
– concise and exact description
– used for illustration purposes
– used for performance comparisons
• Simple to describe
• Trivial to solve
Real-World Problems
– no single, agreed-upon description
– people care about the solutions
• Hours to define
9.
Toy problems
– Touring in Romania
– 8-Puzzle/Sliding Block Puzzle
– Robotic Assembly
– 8 Queen
– Missionaries and Cannibals
– Cryptarithmetic
– Water Jug Problem
10.
Real-World Problem:
Touring in Romania
[Map of Romania: twenty cities (Arad, Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Rimnicu Vilcea, Pitesti, Fagaras, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt) connected by roads labelled with driving distances in km.]
Aim: find a course of action that satisfies a number of specified conditions
11.
Touring in Romania:
Search Problem Definition
• initial state:
– In(Arad)
• possible Actions:
– DriveTo(Zerind), DriveTo(Sibiu), DriveTo(Timisoara), etc.
• goal state:
– In(Bucharest)
• step cost:
– distances between cities
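The definition above is enough to run a search. Below is a minimal uniform-cost search over part of the Romania map; the distances are from the standard Russell and Norvig map, and only the edges relevant to the Arad-to-Bucharest route are included:

```python
import heapq

# Road distances (km) from the standard Russell & Norvig Romania map
# (only the edges needed for the Arad-to-Bucharest example).
ROADS = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Fagaras", "Bucharest"): 211,
    ("Pitesti", "Bucharest"): 101,
}
GRAPH = {}
for (a, b), d in ROADS.items():          # roads are two-way
    GRAPH.setdefault(a, []).append((b, d))
    GRAPH.setdefault(b, []).append((a, d))

def uniform_cost(start, goal):
    """Expand the cheapest frontier node first; returns (cost, path)."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, city, path = heapq.heappop(frontier)
        if city == goal:
            return cost, path
        if city in explored:
            continue
        explored.add(city)
        for nxt, d in GRAPH.get(city, []):
            heapq.heappush(frontier, (cost + d, nxt, path + [nxt]))
    return None

print(uniform_cost("Arad", "Bucharest"))
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that the optimal route goes through Rimnicu Vilcea and Pitesti (418 km) rather than the shorter-looking Fagaras road (450 km), which is exactly the kind of result a step-cost-aware search buys you.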
15.
Example: robotic assembly
• states?: real-valued coordinates of robot joint angles; parts of the
object to be assembled
• actions?: continuous motions of robot joints
• goal test?: complete assembly
• path cost?: time to execute
16.
Example: 8 queens problem
The incremental formulation involves placing queens one by one,
whereas
the complete-state formulation starts with all 8 queens on the board and
moves them around.
In either case, the path cost is of no interest because only the final state
counts; algorithms are therefore compared only on search cost.
States: any arrangement of 0 to 8 queens on the board.
Operators: add a queen to any square.
Goal test: 8 queens on the board, none attacked.
Path cost: zero.
17.
Incremental formulation
Placing a queen on a square that is already attacked cannot work, because
subsequently placing other queens will not undo the attack.
So we might try the following:
States: arrangements of 0 to 8 queens with none attacked
Operators: place a queen in the left-most empty column such that it is not
attacked by any other queen
It is easy to see that the actions given
can generate only states with no attacks;
but sometimes no actions will be possible.
For example, after making the first seven
choices (left-to-right)
A quick calculation shows that there are only
Much fewer sequences to investigate.
The right formulation makes a big
difference to the size of the search space.
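The incremental formulation translates into a short backtracking search. The sketch below (an illustration, not from the slides) keeps one queen per column, placing queens left to right only on squares that are not attacked:

```python
def solve_n_queens(n=8):
    """Backtracking over the incremental formulation:
    rows[c] is the row of the queen in column c; queens are placed
    left to right, only on squares not attacked by earlier queens."""
    solutions = []

    def attacked(rows, row):
        col = len(rows)  # column where the new queen would go
        return any(r == row or abs(r - row) == abs(c - col)
                   for c, r in enumerate(rows))

    def place(rows):
        if len(rows) == n:
            solutions.append(tuple(rows))
            return
        for row in range(n):
            if not attacked(rows, row):
                place(rows + [row])

    place([])
    return solutions
```

For n = 8 this explores only the no-attack arrangements and finds all 92 solutions; the naive "add a queen to any square" operator set would have to wade through an enormously larger space.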
18.
Complete formulation
• States: arrangements of 8 queens, one in each column.
• Operators: move any attacked queen to another square in the
same column.
19.
Example: Missionaries and cannibals problem
(Three missionaries and three cannibals are on one side of a river, along with a boat that can hold
max two people. Find a way to get everyone to the other side, without ever leaving a group of
missionaries in one place outnumbered by the cannibals in that place)
States: a state consists of an ordered sequence of three numbers representing
the number of missionaries, cannibals, and boats on the bank of the river
from which they started. Thus, the start state is (3,3,1)
Operators: from each state the possible operators are to take either one
missionary, one cannibal, two missionaries, two cannibals, or one of each
across in the boat. Thus, there are at most five operators, although most
states have fewer because it is necessary to avoid illegal states. Note that if
we had chosen to distinguish between individual people then there would be
27 operators instead of just 5
Goal test: reached state (0,0,0), i.e., no missionaries, cannibals, or boat left on the starting bank
Path cost: number of crossings
20.
Missionaries and Cannibals:
Successor Function
state → set of <action, successor state> pairs
(L:3m,3c,b-R:0m,0c) → {<2c, (L:3m,1c-R:0m,2c,b)>,
<1m1c, (L:2m,2c-R:1m,1c,b)>,
<1c, (L:3m,2c-R:0m,1c,b)>}
(L:3m,1c-R:0m,2c,b) → {<2c, (L:3m,3c,b-R:0m,0c) >,
<1c, (L:3m,2c,b-R:0m,1c)>}
(L:2m,2c-R:1m,1c,b) → {<1m1c, (L:3m,3c,b-R:0m,0c) >,
<1m, (L:3m,2c,b-R:0m,1c)>}
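The same successor function can be written down in Python. The sketch below (illustrative, not from the slides) represents a state as the (missionaries, cannibals, boat) triple for the starting bank, as above, and breadth-first search then finds a shortest solution:

```python
from collections import deque

def safe(m, c):
    """A bank is safe if missionaries are absent or not outnumbered."""
    return m == 0 or m >= c

def successors(state):
    """(m, c, b) = missionaries, cannibals, boat on the starting bank."""
    m, c, b = state
    result = []
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # boat loads
        nm, nc = (m - dm, c - dc) if b == 1 else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and safe(3 - nm, 3 - nc):
            result.append((nm, nc, 1 - b))
    return result

def solve():
    """Breadth-first search from (3,3,1) to (0,0,0); returns the state path."""
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
```

`successors((3, 3, 1))` yields exactly the three moves listed above (2c, 1m1c, 1c), and the shortest solution found takes 11 crossings.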
22.
     (B marks the bank where the boat is)
     LEFT BANK                              RIGHT BANK
 0   Initial setup:                 MMMCCC B   -
 1   Two cannibals cross over:      MMMC       CC B
 2   One comes back:                MMMCC B    C
 3   Two cannibals go over again:   MMM        CCC B
 4   One comes back:                MMMC B     CC
 5   Two missionaries cross:        MC         MMCC B
 6   A missionary & cannibal return: MMCC B    MC
 7   Two missionaries cross again:  CC         MMMC B
 8   A cannibal returns:            CCC B      MMM
 9   Two cannibals cross:           C          MMMCC B
 10  One returns:                   CC B       MMMC
 11  And brings over the third:     -          MMMCCC B
23.
Missionaries and Cannibals: Goal State and Path Cost
• goal state: all missionaries, all cannibals, and the boat are on the right bank
• path cost: step cost of 1 for each crossing; path cost = number of crossings = length of path
• solution path: 4 optimal solutions, each of cost 11
24.
Example: Cryptarithmetic
States: a cryptarithmetic puzzle with some letters replaced by digits.
Operators: replace all occurrences of a letter with a digit not already
appearing in the puzzle.
Goal test: puzzle contains only digits, and represents a correct sum.
Path cost: zero. All solutions equally valid.
One way to do this is to adopt a fixed order, e.g., alphabetical order.
A better choice is to do whichever is the most constrained substitution,
that is, the letter that has the fewest legal possibilities given the
constraints of the puzzle.
25. Crypt-Arithmetic puzzle
• Problem Statement:
– Solve the following puzzle by assigning a digit (0-9) to each letter in
such a way that each letter gets a unique digit and the addition below holds.
– Constraints: no two letters have the same value; the usual rules of
arithmetic apply.
• Initial Problem State
– S = ?; E = ?; N = ?; D = ?; M = ?; O = ?; R = ?; Y = ?
S E N D
+ M O R E
_________________________________
M O N E Y
_________________________________
26. Carries:
C4 = ? ;C3 = ? ;C2 = ? ;C1 = ?
Constraint equations:
Y = D + E → C1
E = N + R + C1 → C2
N = E + O + C2 → C3
O = S + M + C3 → C4
M = C4
C4 C3 C2 C1 Carry
S E N D
+ M O R E
_________________________________
M O N E Y
_________________________________
27. • We can easily see that M has to be a
non-zero digit, so the value of C4 = 1.
1. M = C4 → M = 1
2. O = S + M + C3 → C4
For C4 = 1, S + M + C3 > 9, i.e.
S + 1 + C3 > 9, so S + C3 > 8.
If C3 = 0 then S = 9; if C3 = 1,
then S = 8 or 9.
• Consider the path S = 9:
– C3 = 0 or 1.
– C3 = 1 is not possible: then
O = S + M + C3 gives a column sum of 11,
so O would have to be assigned digit 1, but 1 is
already assigned to M.
– Therefore the only choice is C3 = 0, and
the column sum S + M + C3 = 10, so O is
assigned the digit 0 with carry C4 = 1.
• Therefore O = 0.
M = 1, O = 0
C4 C3 C2 C1 Carry
S E N D
+ M O R E
_________________________________
M O N E Y
_________________________________
Y = D + E → C1
E = N + R + C1 → C2
N = E + O + C2 → C3
O = S + M + C3 → C4
M = C4
28. 3. Since C3 = 0, N = E + O + C2 produces
no carry.
• As O = 0, N = E + C2.
• Since N ≠ E, we must have C2 = 1.
Hence N = E + 1.
• Now E can take a value from 2 to 8 (0, 1, 9
are already assigned):
– If E = 2, then N = 3.
– Since C2 = 1, from E = N + R + C1 → C2
we get 10 + 2 = N + R + C1 = 3 + R + C1.
• If C1 = 0 then R = 9, which is not
possible, as we are on the path with
S = 9.
• If C1 = 1 then R = 8; then
» from Y = D + E → C1 we get
10 + Y = D + 2,
» and no remaining value of D
gives a valid Y.
– Trying E = 3 and E = 4 similarly fails in
each case.
C4 C3 C2 C1 Carry
S E N D
+ M O R E
_________________________________
M O N E Y
_________________________________
Y = D + E → C1
E = N + R + C1 → C2
N = E + O + C2 → C3
O = S + M + C3 → C4
M = C4
29. • If E = 5, then N = 6.
– Since C2 = 1, from E = N + R + C1 → C2
we get 10 + 5 = N + R + C1 = 6 + R + C1.
– If C1 = 0 then R = 9, which is not
possible, as we are on the path with
S = 9.
– If C1 = 1 then R = 8; then
• from Y = D + E → C1 we get
10 + Y = D + 5, i.e., D = 5 + Y.
• If Y = 2 then D = 7. These
values are possible.
• Hence we get the final solution as
given below and on backtracking, we
may find more solutions.
S = 9 ; E = 5 ; N = 6 ; D = 7 ;
M = 1 ; O = 0 ; R = 8 ;Y = 2
C4 C3 C2 C1 Carry
S E N D
+ M O R E
_________________________________
M O N E Y
_________________________________
Y = D + E → C1
E = N + R + C1 → C2
N = E + O + C2 → C3
O = S + M + C3 → C4
M = C4
30. Constraints:
Y = D + E → C1
E = N + R + C1 → C2
N = E + O + C2 → C3
O = S + M + C3 → C4
M = C4
Search tree (✗ marks a pruned branch):
Initial State
M = 1 → C4 = 1
O = 1 + S + C3
  S = 9: C3 = 0 → O = 0 ✓; C3 = 1 → O = 1 ✗ (clashes with M)
  S = 8 ✗
Fixed: M = 1, O = 0
N = E + O + C2 = E + C2 → C2 = 1 (must) → N = E + 1
  E = 2: N = 3; E = N + R + C1 gives 10 + 2 = 3 + R + C1
    C1 = 0 → R = 9 ✗; C1 = 1 → R = 8;
    then 10 + Y = D + E = D + 2: D = 8 → Y = 0 ✗, D = 9 → Y = 1 ✗
  E = 3, E = 4: fail similarly ✗
  E = 5: N = 6; E = N + R + C1 gives 10 + 5 = 6 + R + C1
    C1 = 0 → R = 9 ✗; C1 = 1 → R = 8;
    then 10 + Y = D + E = D + 5: D = 7, Y = 2 ✓
The first solution obtained is:
M = 1, O = 0, S = 9, E = 5, N = 6, R = 8, D = 7, Y = 2
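The hand derivation above can be verified by brute force. The sketch below (not in the slides) simply tries every assignment of distinct digits to the eight letters, rejecting assignments that give a leading zero:

```python
from itertools import permutations

def solve_send_more():
    """Exhaustively search digit assignments for SEND + MORE = MONEY."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue  # leading letters cannot be zero
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return {"S": s, "E": e, "N": n, "D": d,
                    "M": m, "O": o, "R": r, "Y": y}
```

The puzzle has a unique solution, 9567 + 1085 = 10652, agreeing with the assignment derived by hand. The hand derivation wins on efficiency: the carry constraints prune almost everything, while brute force enumerates on the order of 1.8 million permutations.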
31. C4 C3 C2 C1 Carries
B A S E
+ B A L L
_________________________________
G A M E S
_________________________________
Constraints equations are:
E + L = S → C1
S + L + C1= E → C2
2A + C2 = M → C3
2B + C3 = A → C4
G = C4
Initial Problem State
G = ?; A = ?;M = ?; E = ?; S = ?; B = ?; L = ?
32. 1. G = C4 → G = 1
2. 2B + C3 = A → C4
2.1 Since C4 = 1, 2B + C3 > 9, so B can take values from 5 to 9.
2.2 Try the following steps for each value of B from 5 to 9 till we get a
possible value of B.
• If B = 5:
  if C3 = 0 → A = 0, and then M = 2A + C2 = C2, so M = 0 (for C2 = 0) or
  M = 1 (for C2 = 1); both clash with digits already assigned (A = 0, G = 1)
  if C3 = 1 → A = 1, which clashes (as G = 1 already)
• For B = 6 we get a similar contradiction while generating the search tree.
• If B = 7, then for C3 = 0 we get A = 4; M = 8 if C2 = 0, which leads to a
contradiction, so this path is pruned. If C2 = 1, then M = 9.
3. Let us solve S + L + C1 = E → C2 and E + L = S → C1 together.
• Combining both equations (with C2 = 1 from step 2), we get
2L = 9·C1 + 10, so C1 = 0 and L = 5.
• Using L = 5 and C1 = 0, S = E + 5, and S + 5 must generate the carry
C2 = 1 as shown above. The digits still unassigned are {0, 2, 3, 6, 8}.
• If E = 2 then S = 7, which clashes (as B = 7 already).
• If E = 3 then S = 8.
• Therefore E = 3 and S = 8 are fixed.
4. Hence we get the final solution given below; on backtracking we may look
for more solutions, but in this case we get only one.
G = 1; A = 4; M = 9; E = 3; S = 8; B = 7; L = 5
33.
      MARS        Each of the ten letters (m, a, r, s, v, e,
   + VENUS        n, u, t, and p) represents a unique number
  + URANUS        in the range 0..9.
  + SATURN
  --------
   NEPTUNE        Solution: NEPTUNE = 1078610
                  (M = 4, A = 5, R = 9, etc.)
34. Water Jug Problem
• Problem Statement: "You are given two jugs, a
4-gallon one and a 3-gallon one. Neither has
any measuring markers on it. There is a tap
that can be used to fill the jugs with water.
How can you get exactly 2 gallons of water
into the 4-gallon jug?"
35. Production Rules
Rules                                           Conclusions
R1:  (X, Y | X < 4) → (4, Y)                    {Fill 4-gallon jug}
R2:  (X, Y | Y < 3) → (X, 3)                    {Fill 3-gallon jug}
R3:  (X, Y | X > 0) → (0, Y)                    {Empty 4-gallon jug}
R4:  (X, Y | Y > 0) → (X, 0)                    {Empty 3-gallon jug}
R5:  (X, Y | X+Y >= 4 ∧ Y > 0) → (4, Y-(4-X))   {Pour from 3-gallon jug into 4-gallon jug until it is full}
R6:  (X, Y | X+Y >= 3 ∧ X > 0) → (X-(3-Y), 3)   {Pour from 4-gallon jug into 3-gallon jug until it is full}
R7:  (X, Y | X+Y <= 4 ∧ Y > 0) → (X+Y, 0)       {Pour all water from 3-gallon jug into 4-gallon jug}
R8:  (X, Y | X+Y <= 3 ∧ X > 0) → (0, X+Y)       {Pour all water from 4-gallon jug into 3-gallon jug}
R9:  (X, Y | X > 0) → (X-D, Y)                  {Pour some water D out of the 4-gallon jug}
R10: (X, Y | Y > 0) → (X, Y-D)                  {Pour some water D out of the 3-gallon jug}
36.
Solution
Step  Rule applied                                      4-g jug  3-g jug
1     Initial State                                     0        0
2     R1 {Fill 4-gallon jug}                            4        0
3     R6 {Pour from 4- into 3-g jug until it is full}   1        3
4     R4 {Empty 3-gallon jug}                           1        0
5     R8 {Pour all water from 4- into 3-gallon jug}     0        1
6     R1 {Fill 4-gallon jug}                            4        1
7     R6 {Pour from 4- into 3-g jug until it is full}   2        3
8     R4 {Empty 3-gallon jug}                           2        0   <- Goal State
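The table above corresponds to a breadth-first search over (x, y) states using rules R1–R8 (the nondeterministic R9 and R10 are not needed for this instance). A Python sketch, assuming that rule set:

```python
from collections import deque

def water_jug(capacity=(4, 3), target=2):
    """BFS over (x, y) = gallons in the 4- and 3-gallon jugs."""
    cx, cy = capacity
    start = (0, 0)
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == target:
            return path
        total = x + y
        succs = {
            (cx, y), (x, cy),                      # R1, R2: fill a jug
            (0, y), (x, 0),                        # R3, R4: empty a jug
            (min(total, cx), max(0, total - cx)),  # R5/R7: pour 3 -> 4
            (max(0, total - cy), min(total, cy)),  # R6/R8: pour 4 -> 3
        }
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
```

BFS stops as soon as the 4-gallon jug holds 2 gallons, which takes 6 rule applications, exactly the point the table reaches at step 7; the final R4 in the table only tidies up the 3-gallon jug.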
37. Water Jug Problem
Given a full 5-gallon jug and a full 2-gallon jug, fill the 2-gallon jug with
exactly one gallon of water.
• State: ?
• Initial State: ?
• Operators: ?
• Goal State: ?
38.
Real World problems
• Route Finding applications, such as routing in computer networks,
automated travel advisory systems, and airline travel planning systems.
• Touring problems, such as the traveling salesperson problem, a famous
touring problem in which each city must be visited exactly once. The aim is
to find the shortest tour.
• VLSI layout with cell layout and channel routing.
• Robot navigation is a generalization of the route-finding problem
described earlier. Rather than a discrete set of routes, a robot can move in a
continuous space with (in principle) an infinite set of possible actions and
states.
• Assembly sequencing of complex objects by a robot: the problem is to
find an order in which to assemble the parts of some object. If the wrong
order is chosen, there will be no way to add some part later in the sequence
without undoing some of the work already done.
• Monkey Banana Problem
39. General Route Finding Problem
• Problem Statement
• States: Locations
• Initial State: Starting Point
• Successor Function (Operators): Move from one location
to another
• Goal Test: Arrive at certain location
• Path Cost: Money, time, travel, comfort, scenery,…
40. Travelling Salesman Problem
• Problem Statement
• States: Locations/cities
• Initial State: Starting Point
• Successor Function (Operators): Move from one location
to another
• Goal Test: all locations visited, agent back at the initial location
• Path Cost: Distance between locations
41. VLSI layout problem
• Problem Statement
• States: Positions of components, wires on chip
• Initial State:
– Incremental
– Complete State
• Successor Function(Operators)
– Incremental
– Complete State
• Goal Test: All components placed and connected as specified
• Path Cost: distance, capacity, no. of connections per component
42. Robot Navigation
• Problem Statement
• States: Locations, Position of actuator
• Initial State: Start pos
• Successor Function(Operators): Movement, actions of
actuators
• Goal Test: Task Dependent
• Path Cost: Distance, Energy consumption
43. Assembly Sequencing
• Problem Statement
• States: Location of components
• Initial State: No components assembled
• Successor Function(Operators): Place component
• Goal Test: System fully assembled
• Path Cost: Number of moves
44. Monkey Banana Problem
• Problem Statement
• States:
• Initial State: Monkey on the floor without bananas
• Successor Function(Operators)
• Goal Test
• Path Cost: The number of moves by monkey to get bananas
45. Search Problem
• State space
– each state is an abstract representation of the
environment
– the state space is discrete
• Initial state
• Successor function
• Goal test
• Path cost
46. Search Problem
• State space
• Initial state:
– usually the current state
– sometimes one or several hypothetical states
(“what if …”)
• Successor function
• Goal test
• Path cost
47. Search Problem
• State space
• Initial state
• Successor function:
– [state → subset of states]
– an abstract representation of the possible actions
• Goal test
• Path cost
48. Search Problem
• State space
• Initial state
• Successor function
• Goal test:
– usually a condition
– sometimes the description of a state
• Path cost
49. Search Problem
• State space
• Initial state
• Successor function
• Goal test
• Path cost:
– [path → positive number]
– usually, path cost = sum of step costs
– e.g., number of moves of the empty tile
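Taken together, the five components describe an interface that any search algorithm can work against. The sketch below (illustrative class and method names, not from the slides) shows that interface, a generic breadth-first search over it, and a toy instance:

```python
from collections import deque

class SearchProblem:
    """Abstract interface for the five components on the slides."""
    def initial_state(self):
        raise NotImplementedError
    def successors(self, state):
        """Yield (action, next_state) pairs."""
        raise NotImplementedError
    def goal_test(self, state):
        raise NotImplementedError
    def step_cost(self, state, action, next_state):
        return 1  # default: every step costs 1

def breadth_first_search(problem):
    """Return a shortest action sequence from the initial state to a goal."""
    start = problem.initial_state()
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if problem.goal_test(state):
            return actions
        for action, nxt in problem.successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

class CountTo(SearchProblem):
    """Toy instance: starting from 1, reach n by adding 1 or doubling."""
    def __init__(self, n):
        self.n = n
    def initial_state(self):
        return 1
    def successors(self, state):
        yield "+1", state + 1
        yield "*2", state * 2
    def goal_test(self, state):
        return state == self.n
```

`breadth_first_search(CountTo(10))` returns the four-step plan `['+1', '*2', '+1', '*2']`; swapping in the water-jug or missionaries-and-cannibals successor functions would need no change to the search code.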
50. Assumptions in Basic Search
• The environment is static
• The environment is discretizable
• The environment is observable
• The actions are deterministic
51. University Questions
• Solve the following crypt-arithmetic problems:
– CROSS + ROADS = DANGER
– SEND + MORE = MONEY
– BASE + BALL = GAMES
– FORTY + TEN + TEN = SIXTY
– EAT + THAT = APPLE
• You are given 2 jugs of capacity 4 litres and 3 litres. Neither jug has
any measuring markers on it. There is a pump that can be used to fill the
jugs with water. How can you get exactly 2 litres of water in the 4-litre
jug? Formulate the problem in state space and draw the complete diagram.
• Consider jugs of volumes 3 and 7 units are available. Show the trace
to measure 2 and 5 units.
• Solve the Wolf, Goat and Cabbage problem.