This work presents the design of an omnidirectional robot controlled through a vision system. The design implements a computational system that follows commands given through human gestures and movements, interpreted by image processing. The principal objective is a robot capable of recognizing commands through gestures given by the user. The development includes a motor control algorithm running on the Ti-Tech M4 microcontroller. The main purpose of the design is to build a development platform that can support a variety of service applications and also serve as an experimental platform for students who want to learn how such a system works and to implement new applications and features. The work presented includes original designs for the mechanical structure, the control electronics, and the operational software. The mechanical structure was built from 6063-T5 aluminum because of its low density. The V-REP software is used to create and simulate a near-realistic environment in which the experiments are carried out. The vision system detects a person in the image, reduces the detected pose to a line-based skeleton, and then translates that skeleton into a command. In this way, the omnidirectional robot can operate as a service robot in constrained and crowded environments. The detection algorithm is based on the vision system and is backed up by an ultrasonic sensor system. Likewise, an omnidirectional drive was chosen to provide greater mobility in any type of environment, so that the robot's function does not depend on its physical surroundings. As a proof of concept, the robot was found to work as expected, and it can be reprogrammed by developers implementing additional applications on this development platform with little or no modification.
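The pose-to-command step described above could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes the line-based skeleton is available as a mapping from joint names to image coordinates, and the joint names, the "wrist above shoulder" rule, and the command set are all invented here for illustration.

```python
def pose_to_command(skeleton):
    """Map a line-based skeleton to a motion command string.

    `skeleton` maps joint names to (x, y) image coordinates, with y
    increasing downward (the usual image convention). The joint names
    and the command vocabulary are illustrative assumptions.
    """
    def raised(side):
        # An arm counts as raised when the wrist sits above the shoulder.
        _, shoulder_y = skeleton[side + "_shoulder"]
        _, wrist_y = skeleton[side + "_wrist"]
        return wrist_y < shoulder_y

    right_up, left_up = raised("r"), raised("l")
    if right_up and left_up:
        return "STOP"        # both arms raised
    if right_up:
        return "TURN_RIGHT"  # only right arm raised
    if left_up:
        return "TURN_LEFT"   # only left arm raised
    return "FORWARD"         # arms down: keep moving
```

A gesture recognizer of this shape keeps the vision front end decoupled from the motor controller: the skeleton extractor can change without affecting the command vocabulary consumed by the drive electronics.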