Publications of the Lab


Peter K. Allen, Andrew T. Miller, Paul Y. Oh, Brian S. Leibowitz. Using Tactile and Visual Sensing with a Robotic Hand. To appear in Proceedings IEEE International Conference on Robotics and Automation, Albuquerque, NM, April 1997.
Most robotic hands are either sensorless or lack the ability to accurately and robustly report position and force information relating to contact. This paper describes a robotic hand system that uses a limited set of native joint position and force sensing along with custom designed tactile sensors and real-time vision modules to accurately compute finger contacts and applied forces for grasping tasks. Three experiments are described: integration of real-time visual trackers in conjunction with internal strain gauge sensing to correctly localize and compute finger forces, determination of contact points on the inner and outer links of a finger through tactile sensing, and determination of vertical displacement by tactile sensing for a grasping task.
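As a small illustration of how a strain-gauge torque reading and a visually localized contact can be turned into a force estimate, the sketch below uses a planar single-link model; the function name and the simple torque-over-moment-arm relation are assumptions of this sketch, not the computation used in the paper.

    # Illustrative only: estimate the normal force at a contact point on a
    # single finger link, given a joint torque from a strain gauge and a
    # contact location from vision or tactile sensing.
    def contact_force_from_torque(joint_torque_nm, contact_dist_m):
        """Planar single-link model: torque = force * moment arm, so the
        normal force at the contact is torque / distance from the joint."""
        if contact_dist_m <= 0.0:
            raise ValueError("contact point must lie beyond the joint axis")
        return joint_torque_nm / contact_dist_m

    # Example: a 0.12 N*m strain-gauge torque with a contact localized 0.04 m
    # from the joint axis implies a normal force of roughly 3 N.
    print(contact_force_from_torque(0.12, 0.04))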
Billibon H. Yoshimi and Peter Allen. Closed-Loop Visual Grasping and Manipulation. Submitted to IEEE International Conference on Robotics and Automation, Minneapolis, MN, April 1996.
Sensors play an important role in providing feedback to robotic control systems. Vision can be an effective sensing modality due to its speed, low cost, and flexibility. It can also serve as an external sensor that provides control information for devices that lack internal sensing. Many robot grippers have no internal sensors; because they operate under open-loop control, they are prone to error and usually require a precise model of the environment to be effective. This paper describes two visual control primitives that can be used to provide position control for a sensorless robot system. The primitives use a simple visual tracking and correspondence scheme to provide real-time feedback control in the presence of imprecise camera calibrations. Experimental results are shown for the positioning task of locating, picking up, and inserting a bolt into a nut under visual control. Results are also presented for the visual control of a bolt-tightening task.
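A minimal sketch of the kind of image-based position feedback described above, closing the loop on pixel error alone; the track_feature and move_relative interfaces are hypothetical stand-ins for the visual tracker and robot controller, and the loop is only an illustration, not the paper's actual primitives.

    # Drive the manipulator so a tracked gripper feature converges on a tracked
    # target feature in the image, using only pixel-space error.
    def visual_positioning_step(track_feature, move_relative,
                                gain=0.2, tol_px=2.0):
        gripper_uv = track_feature("gripper")   # pixel location of gripper mark
        target_uv = track_feature("target")     # pixel location of goal feature
        err_u = target_uv[0] - gripper_uv[0]
        err_v = target_uv[1] - gripper_uv[1]
        if abs(err_u) < tol_px and abs(err_v) < tol_px:
            return True                         # positioning has converged
        # Proportional correction; the fixed gain stands in for the unknown
        # image-to-world scaling, so no explicit calibration is needed.
        move_relative(gain * err_u, gain * err_v)
        return False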
Michael Reed and Peter K. Allen. Automated Model Acquisition using Volumes of Occlusion. Submitted to IEEE International Conference on Robotics and Automation, Minneapolis, MN, April, 1996.
Two primary requirements of any system that relies on active vision to perform modeling from observation are that it be able to use previously acquired data to drive the sensing process and that it can iteratively incorporate newly sensed data. This paper discusses an approach to automating CAD model acquisition by allowing the system to keep track of which parts of the sensed object have yet to be imaged. This is achieved by explicitly representing the volume of occlusion as well as the visible surfaces of the object obtained from any one sensing operation. Models built from distinct views of the object, each including its volume of occlusion, are merged using set operations. The resulting composite model consists of the visible surfaces from each view along with the intersection of the volumes of occlusion, and contains the information necessary for planning the next viewpoint.
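The merging step lends itself to a simple illustration. The sketch below uses a voxel grid as a stand-in for the solid models in the paper: each view contributes the surface voxels it observed and the occlusion volume it could not see; merging unions the surfaces and intersects the occlusions, so the unknown region can only shrink as views are added.

    import numpy as np

    # Toy voxel version of the merge: the paper operates on solid CAD models
    # with set operations; the boolean grids here are an assumption of this
    # sketch only.
    def merge_views(views, shape=(32, 32, 32)):
        surfaces = np.zeros(shape, dtype=bool)   # union of observed surfaces
        unknown = np.ones(shape, dtype=bool)     # intersection of occlusion volumes
        for surface_voxels, occlusion_voxels in views:
            surfaces |= surface_voxels
            unknown &= occlusion_voxels
        return surfaces, unknown                 # "unknown" drives the next view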
Steven Abrams and Peter K. Allen. Swept Volumes and Their Use in Viewpoint Computation in Robot Work-Cells. To appear in Proceedings IEEE International Symposium on Assembly and Task Planning, Pittsburgh, PA, August 1995.
Steven Abrams and Peter K. Allen. Computing Swept Volumes for Sensor Planning Tasks. In Proceedings 1994 DARPA Image Understanding Workshop.
Steven Abrams, Peter K. Allen, and Konstantinos A. Tarabanis. Dynamic sensor planning. In International Conference on Intelligent Autonomous Systems, Pittsburgh, PA, February 1993.
Steven Abrams, Peter K. Allen, and Konstantinos A. Tarabanis. Dynamic sensor planning. In Proceedings 1993 IEEE International Conference on Robotics and Automation, Atlanta, GA, May 1993. Also in Proceedings DARPA 1993 Image Understanding Workshop.
In this paper, we describe a method of extending the sensor planning abilities of the "MVP" Machine Vision Planning system to plan viewpoints for monitoring a preplanned robot task. The dynamic sensor planning system presented here analyzes geometric models of the environment and of the planned motions of the robot, as well as optical models of the vision sensor. Using a combination of swept volumes and a temporal interval search technique, it computes a series of viewpoints, each of which provides a valid view of the task over a different interval. By mounting a camera on a second manipulator, the camera can be moved to each viewpoint at the appropriate time during the task, so that a robust view suitable for monitoring is always available. Experimental results from monitoring a simulated robot operation are presented, and directions for future research are discussed.
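A greatly simplified sketch of the temporal side of this planning, assuming each candidate viewpoint has already been assigned the task interval over which it remains valid; a greedy interval cover stands in here for the paper's interval search, and the viewpoint names are hypothetical.

    # Choose a sequence of viewpoints whose validity intervals cover the task.
    def cover_task(task_end, candidates):
        """candidates: {viewpoint_name: (t_start, t_end)} validity intervals."""
        plan, t = [], 0.0
        while t < task_end:
            # among viewpoints valid at time t, take the one valid the longest
            usable = {v: iv for v, iv in candidates.items() if iv[0] <= t < iv[1]}
            if not usable:
                raise RuntimeError("no candidate viewpoint covers time %.2f" % t)
            best = max(usable, key=lambda v: usable[v][1])
            plan.append((best, t, usable[best][1]))
            t = usable[best][1]
        return plan

    print(cover_task(10.0, {"cam_pose_A": (0.0, 4.5),
                            "cam_pose_B": (3.0, 8.0),
                            "cam_pose_C": (7.0, 10.0)}))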
Steven Abrams and Peter K. Allen. Sensor planning in an active robotic work cell. In Proceedings SPIE Intelligent Robotic Systems Conference on Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, November 1991. Also in Proceedings DARPA 1992 Image Understanding Workshop.
P. Allen, A. Timcenko, B. Yoshimi, and P. Michelman. Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Trans. on Robotics and Automation, 9(2):152-165, 1993.
Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm equipped with a dexterous hand that can be used to pick up a moving object. We are interested in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping parts on a moving conveyor, assembling articulated parts, or grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic rather than static environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and grasp planning. The system operates at approximately human arm-movement rates, and we present experimental results in which a moving model train is tracked, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.
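To lead a moving object rather than lag behind it, the arm controller needs some form of prediction. The sketch below uses a constant-velocity alpha-beta filter as one common way to do this; it is only an illustration, not the predictive-control scheme actually used in the paper.

    # One-axis constant-velocity predictor; run three instances for 3-D tracking.
    class AlphaBetaPredictor:
        def __init__(self, alpha=0.85, beta=0.005, dt=1 / 60.0):
            self.alpha, self.beta, self.dt = alpha, beta, dt
            self.pos, self.vel = 0.0, 0.0

        def update(self, measured_pos):
            predicted = self.pos + self.vel * self.dt
            residual = measured_pos - predicted
            self.pos = predicted + self.alpha * residual
            self.vel = self.vel + (self.beta / self.dt) * residual
            return self.pos + self.vel * self.dt   # position to command the arm toward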
R. T. Farouki, K. Tarabanis, J. U. Korein, J. S. Batchelder, and S. R. Abrams. Offset curves in layered manufacturing. In Proceedings of the 1994 International Mechanical Engineering Congress and Exposition (IMECE), November 1994.
Paul Michelman and Peter Allen. Forming complex dextrous manipulations from task primitives. In 1994 IEEE International Conference on Robotics & Automation, volume 4, pages 3383-3388, San Diego, CA, May 1994.
This paper discusses the implementation of complex manipulation tasks with a dextrous hand. The approach used is to build a set of primitive manipulation functions and combine them to form complex tasks. Only fingertip, or precision, manipulations are considered. Each function performs a simple two-dimensional translation or rotation that can be generalized to work with objects of different sizes and using different grasping forces. Complex tasks are sequential combinations of the primitive functions. They are formed by analyzing the workspaces of the individual tasks and controlled by finite state machines. We present a number of examples, including a complex manipulation (removing the top of a child-proof medicine bottle) that incorporates different hybrid position/force specifications of the primitive functions of which it is composed. The work has been implemented with a robot hand system using a Utah-MIT hand.
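A minimal sketch of sequencing primitives with a finite state machine, assuming each primitive is a callable that reports success; the primitive names and the bottle-cap ordering in the comment are illustrative, not the paper's exact task encoding.

    # Execute a complex task as an ordered sequence of primitive functions.
    def run_task(primitives, sequence):
        """primitives: {name: callable returning True on success};
           sequence: ordered primitive names forming the complex task."""
        for state in sequence:
            print("state:", state)
            if not primitives[state]():
                print("primitive '%s' failed; aborting task" % state)
                return False
        return True

    # e.g. a child-proof cap might be encoded as press, twist, then lift:
    # run_task(prims, ["grasp_cap", "press_down", "rotate_ccw", "lift_off"])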
Paul Michelman and Peter Allen. Shared autonomy in a robot hand teleoperation system. To appear in 1994 IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS '94), 1994.
This paper considers adding autonomy to robot hands used in teleoperation systems. Currently, the finger positions of robot hands in teleoperation systems are commanded by a master device such as a Dataglove or exoskeleton. There are several difficulties with this approach: accurate calibration is hard to achieve; robot hands have different capabilities from human hands; and complex force reflection is difficult. In this paper, we propose a model of hand teleoperation in which the input device commands the motions of a grasped object rather than the joint displacements of the fingers. To achieve this goal, the hand requires greater autonomy and the capability to perform high-level functions with minimal external input. Therefore, a set of general, primitive manipulation functions that can be performed automatically is defined. These elementary functions control simple rotations and translations of the grasped object. They are incorporated into a teleoperation system by using a simple input device as a control signal. Preliminary implementations with a Utah/MIT hand are discussed.
Konstantinos Tarabanis, Roger Y. Tsai, and Steven Abrams. Planning viewpoints that simultaneously satisfy several feature detectability constraints for robotic vision. In Proceedings Fifth International Conference on Advanced Robotics, 1991.
Aleksandar Timcenko, Steven Abrams, and Peter K. Allen. APHRODITE: Intelligent planning, control and sensing in a distributed robotic system. In International Conference on Intelligent Autonomous Systems, Pittsburgh, PA, February 1993.
In this paper we describe a general-purpose robot programming environment built at Columbia University's Center for Research in Intelligent Systems. The environment is based on a distributed multi-processor architecture that offers great flexibility and computing power and provides reliable real-time response, while using off-the-shelf software tools. The supporting software we developed provides transparent operation across the network and distributed interrupt handling. We define the interfaces between the main hierarchical levels in the system and describe an implementation that has been used to create a distributed, multi-tasking application. We have implemented a test task consisting of an assembly operation that includes compliant motion planning in an uncertain environment and intelligent monitoring of the assembly process.
Billibon H. Yoshimi and Peter K. Allen. Active, uncalibrated visual servoing. In 1994 IEEE International Conference on Robotics & Automation, volume 4, pages 156-161, San Diego, CA, May 1994.
We propose a method for visual control of a robotic system which does not require the formulation of an explicit calibration between image space and the world coordinate system. Calibration is known to be a difficult and error prone process. By extracting control information directly from the image, we free our technique from the errors normally associated with a fixed calibration. We demonstrate this by performing a peg-in-hole alignment using an uncalibrated camera to control the positioning of the peg. The algorithm utilizes feedback from a simple geometric effect, rotational invariance, to control the positioning servo loop. The method uses an approximation to the Image Jacobian to provide smooth, near-continuous control.
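A minimal sketch of what servoing with an approximate Image Jacobian can look like, with the Jacobian estimated from small exploratory motions rather than a camera calibration; the move_relative and track_point interfaces are hypothetical, and this is an illustration of the general idea rather than the paper's algorithm.

    import numpy as np

    def estimate_image_jacobian(move_relative, track_point, step=2.0):
        """Relate small end-effector motions (mm) to image-feature motions (px)."""
        J = np.zeros((2, 2))
        for axis in range(2):
            before = np.array(track_point(), dtype=float)
            delta = np.zeros(2)
            delta[axis] = step
            move_relative(*delta)
            after = np.array(track_point(), dtype=float)
            J[:, axis] = (after - before) / step
            move_relative(*(-delta))            # undo the exploratory motion
        return J

    def servo_step(J, current_px, target_px, gain=0.5):
        """One uncalibrated servo update: least-squares motion that reduces pixel error."""
        error = np.array(target_px, dtype=float) - np.array(current_px, dtype=float)
        return gain * np.linalg.lstsq(J, error, rcond=None)[0]   # mm to move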
Billibon H. Yoshimi and Peter K. Allen. Visual Control of Grasping and Manipulation Tasks. In 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Las Vegas, NV, October 2-5, 1994.
This paper discusses the problem of visual control of grasping. We have implemented an object tracking system that can be used to provide visual feedback for locating the positions of fingers and objects to be manipulated, as well as the relationships between them. This visual analysis can be used to control open-loop grasping systems in a number of manipulation tasks where finger contact, object movement, and task completion need to be monitored and controlled.
