
Introduction

The Rodney Robot project is a hobbyist robotics project to design and build an autonomous house-bot using ROS (Robot Operating System). This article is the fourth in the series describing the project.

Background

In part 1, to help define the requirements for our robot, we selected our first mission and broke it down into a number of Design Goals to make it more manageable.

The mission was taken from the article Let’s build a robot! and was: Take a message to… – Since the robot will [have] the ability to recognize family members, how about the ability to make it the ‘message taker and reminder’. I could say ‘Robot, remind (PersonName) to pick me up from the station at 6pm’. Then, even if that household member had their phone turned on silent, or were listening to loud music or (insert reason to NOT pick me up at the station), the robot could wander through the house, find the person, and give them the message.

The design goals for this mission were:

  • Design Goal 1: To be able to look around using the camera, search for faces, attempt to identify any people seen and display a message for any identified
  • Design Goal 2: Facial expressions and speech synthesis. Rodney will need to be able to deliver the message
  • Design Goal 3: Locomotion controlled by a remote keyboard and/or joystick
  • Design Goal 4: Addition of a laser range finder or similar ranging sensor used to aid navigation
  • Design Goal 5: Autonomous locomotion
  • Design Goal 6: Task assignment and completion notification

In the previous parts of the article, we completed Design Goals 1 and 2. In this part, I'm going to introduce a state machine package and write two nodes which will be used to control the robot's missions and jobs. To start bringing it all together, we will add a second mission which makes use of Design Goals 1 and 2.

A complex plan

smach

When we are finally ready to bring all these Design Goals together, it's going to require a complex system to order and control the various parts of the robot. To do this we are going to use a ROS Python library called smach. The package documentation is available on the ROS Wiki website (smach).

With smach we can develop a hierarchical state machine, adding a lower-level state machine for each new mission.
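
If you have not used smach before, the following minimal sketch (not part of the Rodney code) shows the basic pattern we will use below: each state is a class whose execute method returns an outcome label, and the transitions dictionary wires each outcome to the next state.

import rospy
import smach

# Each smach state declares its possible outcomes and implements execute
class Hello(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])

    def execute(self, userdata):
        rospy.loginfo('In state HELLO')
        return 'done'

class Goodbye(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])

    def execute(self, userdata):
        rospy.loginfo('In state GOODBYE')
        return 'done'

if __name__ == '__main__':
    rospy.init_node('smach_example')
    # The state machine itself terminates with the 'finished' outcome
    sm = smach.StateMachine(outcomes=['finished'])
    with sm:
        smach.StateMachine.add('HELLO', Hello(), transitions={'done':'GOODBYE'})
        smach.StateMachine.add('GOODBYE', Goodbye(), transitions={'done':'finished'})
    sm.execute()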

Gluing Design Goals 1 and 2 together

Although our overall aim is what we have defined as Mission 1, it would be nice to start working on this control mechanism now. What we can do is combine Design Goals 1 and 2 into a smaller mission (Mission 2), which is to search for recognised faces within the head movement range and speak a greeting to anyone that the robot recognises. The processes used for Mission 2 will also form part of Mission 1 when it is complete.

To complete Mission 2 in this article, we are going to write two nodes. The first node, rodney_missions, will contain the code for the state machine making up the missions and jobs. The second node, rodney, will be used to control when the various missions and jobs are started. We will also take the opportunity to add some functionality for reading the keyboard and a game controller, which will be used in Design Goal 3.

Now I’m fully aware that I have introduced a new term there alongside "missions", and that is "jobs". A job is a task that the robot needs to carry out but which is not as complex as a full mission. The node running the state machines is the best place for these "jobs", as they may require the same resources as the more complex missions. For example, the Mission 1 state machine is required to request movement of the head/camera, but we may also want to be able to move the head/camera manually. Although it’s fine to have two nodes subscribing to the same topic, it’s considered bad practice to have two nodes publishing on the same topic, so we will avoid this by having one node action both the "missions" and the "jobs".

Up to now I have kept the node names generic and not named them after this particular robot, so that the nodes could be used in other projects. These two nodes, however, are particular to this robot, so they are named after it.

State machine

We will start with the package and node containing the state machine that controls the different missions and jobs the robot is capable of. As stated above, smach is a Python library, so our package will be written in Python.

Our ROS package for the node is called rodney_missions and is available in the rodney_missions folder. The src folder contains the rodney_missions_node.py file, which holds the main code for the node. The src folder also contains a sub folder called missions_lib; each robot mission we add to Rodney will result in a Python class contained in this folder. Here we are going to work on Mission 2, and the code for that is in the greet_all.py file.

The rodney_missions_node.py file will contain the code to register the node, and will also contain the high level state machine which accepts each mission and job. The greet_all.py file will contain part of the sub state machine for Mission 2. Each time we add a new mission to the robot, we will add a sub state machine for that mission.

The diagram below shows our state machine.

The WAITING state is a special type of state called a MonitorState and simply monitors the /missions/mission_request topic. When a message is received on this topic, it will extract the request and any parameters that go with it, and then transit to the PREPARE state, passing on the request data.

The PREPARE state will carry out any 'Job' requests and then transit back to the WAITING state. If the request was to carry out Mission 2, it will transit to the sub state machine MISSION2.

The SCANNING state is another special state called a SimpleActionState. If you look back to part 2 of these articles we wrote an action server in the head_control node. This action server was responsible for coordinating the head movement and when to attempt to do the face recognition of the captured image. At the time we wrote an action client in a piece of test code so that we could test the functionality. This state will replace the action client for this action. As we develop the robot and want to move the head for other missions, we might remove the head_control node and move the functionality into the rodney_missions node. For now I’m leaving it as it is as an example of how to use the SimpleActionState.

If the action is completed successfully the state machine will transit to the GREETING state where a spoken greeting for all the individuals recognised will be generated. The state machine will then transit to the REPORT state.

The REPORT state simply sends the mission complete message on the /missions/mission_complete topic and transits back to the WAITING state.

Before I explain the code, it is worth stating what is contained in the /missions/mission_request topic. It is of type std_msgs/String and contains an ID for the mission or job followed, depending on the ID, by zero or more parameters separated by the ‘^’ character.

Currently the IDs and parameters are as follows (example request strings are shown after the list):

  • "M2" This ID is a request to conduct Mission 2 and there are no parameters associated with it.
  • "J1" Is a request to conduct Job 1. This job is to playback the supplied wav file and to pass matching text to the robot face to animate the lips. The first parameter following the ID is the wav file name and the second parameter is the matching text.
  • "J2" Is a request to conduct Job 2. This Job is to speak the supplied text and to pass the matching text to the robot face to animate the lips. The first parameter is the text to speak and the second parameter is the matching text. Remember these are separate as the text for the robot face may contain smileys for the robot face expression.
  • "J3" Is a request to conduct Job 3. This Job is to move the position of the head/camera. The first parameter will contain the letter ‘u’ if the camera is to be moved up, ‘d’ if the camera is to be moved down, or ‘-‘ if the camera is not to be moved up or down. The second parameter contains ‘l’ if the camera is to be moved left, ‘r’ if the camera is to be moved right, or ‘-‘ if the camera is not to be moved left to right. 

I’ll now briefly describe the code starting with the rodney_missions_node.py file.

The main function registers our node with ROS and creates an instance of the RodneyMissionsNode class.

import sys
import rospy
import missions_lib
from std_msgs.msg import String, Empty
from smach import State, StateMachine
from smach_ros import MonitorState, SimpleActionState, IntrospectionServer
# The package names for the action and voice message below are assumed
# from the earlier parts of the project (the scan_for_faces action from
# part 2 and the voice message from part 3)
from head_control.msg import scan_for_facesAction, scan_for_facesGoal
from speech.msg import voice

def main(args):
    rospy.init_node('rodney_missions_node', anonymous=False)
    rospy.loginfo("Rodney missions node started")
    rmn = RodneyMissionsNode()        

if __name__ == '__main__':
    main(sys.argv)

The class constructor for RodneyMissionsNode registers ShutdownCallback to be called if the node is shut down, and subscribes to the /missions/mission_cancel topic. It then creates each state and adds the states to the state machine. This includes creating the two special types of state, the MonitorState and the SimpleActionState.

We then create and start an introspection server. This is not required for the robot to operate, but it allows you to run a tool called smach_viewer (started with rosrun smach_viewer smach_viewer.py). This tool can help debug any problems with your state machine and was used to produce the state diagram above.

The constructor then starts the execution of the state machine and hands control over to ROS.

There are three other functions in the RodneyMissionsNode class.

MissionRequestCB is the function called by the WAITING MonitorState when a message is received on the /missions/mission_request topic. It extracts the data from the message and copies it to userdata, which is the mechanism smach provides for passing data between states. It then returns False so that the state machine will transit to the PREPARE state.

CancelCallback is the callback function called if a message is received on the /missions/mission_cancel topic. It will result in the SCANNING state transiting back to WAITING, should the state machine be in that state at the time.

ShutdownCallback is the callback function called if the node receives a command from ROS to shutdown. It again will cancel the action associated with the SCANNING state.

# Top level state machine. The work for each mission is another state machine in the 'mission' states        
class RodneyMissionsNode:

    def __init__(self):
        rospy.on_shutdown(self.ShutdownCallback)
        
        # Subscribe to message to cancel missions        
        self.__cancel_sub = rospy.Subscriber('/missions/mission_cancel', Empty, self.CancelCallback)
        
        # Create top level state machine
        self.__sm = StateMachine(['preempted'])
        with self.__sm:
            # Add the first state which monitors for a mission to run
            StateMachine.add('WAITING',
                             MonitorState('/missions/mission_request',
                             String,
                             self.MissionRequestCB,
                             output_keys = ['mission']),
                             transitions={'valid':'WAITING', 'invalid':'PREPARE', 'preempted':'preempted'}) 
            # Add state to prepare the mission
            StateMachine.add('PREPARE',
                             Prepare(),
                             transitions={'mission2':'MISSION2','done_task':'WAITING'})
            # Add the reporting state
            StateMachine.add('REPORT',
                             Report(),
                             transitions={'success':'WAITING'})
                             
            # Create a sub state machine for mission 2 - greeting
            self.__sm_mission2 = StateMachine(['success', 'aborted', 'preempted'])
            
            with self.__sm_mission2:
                goal_scan = scan_for_facesGoal()                
                StateMachine.add('SCANNING',
                                 SimpleActionState('head_control_node',
                                                   scan_for_facesAction,
                                                   goal=goal_scan,
                                                   result_slots=['detected']),                                 
                                 transitions={'succeeded':'GREETING', 'aborted':'aborted', 'preempted':'preempted'})
                StateMachine.add('GREETING',
                                 missions_lib.Greeting(),                                 
                                 transitions={'success':'success'})
                                 
            # Now add the sub state machine (for mission 2) to the top level one
            StateMachine.add('MISSION2', 
                             self.__sm_mission2, 
                             transitions={'success':'REPORT', 'aborted':'WAITING', 'preempted':'WAITING'}) 
        
        # Create and start the introspection server so that we can use smach_viewer
        sis = IntrospectionServer('server_name', self.__sm, '/SM_ROOT')
        sis.start()
                             
        self.__sm.execute()
        
        # Wait for ctrl-c to stop application
        rospy.spin()
        sis.stop()
        
    
    # Monitor State takes /missions/mission_request topic and passes the mission in user_data to the PREPARE state
    def MissionRequestCB(self, userdata, msg):                
        # Take the message data and send it to the next state in the userdata
        userdata.mission = msg.data
                        
        # Returning False means the state transition will follow the invalid line
        return False
        
    # Callback for cancel mission message
    def CancelCallback(self, data):
        # List all sub state machines which can be preempted
        self.__sm_mission2.request_preempt()
        
    def ShutdownCallback(self):        
        self.__sm.request_preempt()
        # Although we have requested to shutdown the state machine 
        # it will not happen if we are in WAITING until a message arrives

The rodney_missions_node.py file also contains classes that make up the PREPARE and REPORT states.

The class Prepare contains a constructor which declares the outcomes that can follow PREPARE and the data passed into it, and advertises that it will publish messages on the topics /speech/to_speak, /robot_face/text_out and /head_control_node/manual.

The class also contains an execute function which is run when the state is entered. This function examines the request message, carries out any jobs it can, and then returns the outcome which transits back to the WAITING state, or on to the MISSION2 sub state machine (whose first state is SCANNING) if Mission 2 was requested.

# The PREPARE state
class Prepare(State):
    def __init__(self):
        State.__init__(self, outcomes=['mission2','done_task'], input_keys=['mission'])
        self.__speech_pub_ = rospy.Publisher('/speech/to_speak', voice, queue_size=5)
        self.__text_out_pub = rospy.Publisher('/robot_face/text_out', String, queue_size=5)
        self.__man_head = rospy.Publisher('/head_control_node/manual', String, queue_size=1)
    
    def execute(self, userdata):        
        # Based on the userdata either change state to the required mission or carry out single job
        # userdata.mission contains the mission or single job and a number of parameters separated by '^'
        retVal = 'done_task'
        
        # Split into parameters using '^' as the delimiter
        parameters = userdata.mission.split("^")
        
        if parameters[0] == 'M2':
            # Mission 2 is scan for faces and greet those known, there are no other parameters with this mission request
            retVal = 'mission2'
        elif parameters[0] == 'J1':
            # Simple Job 1 is play a supplied wav file and move the face lips
            voice_msg = voice()
            voice_msg.text = ""
            voice_msg.wav = parameters[1]            
            # Publish topic for speech wav and robot face animation
            self.__speech_pub_.publish(voice_msg)
            self.__text_out_pub.publish(parameters[2])
        elif parameters[0] == 'J2':
            # Simple Job 2 is to speak the supplied text and move the face lips
            voice_msg = voice()
            voice_msg.text = parameters[1]
            voice_msg.wav = ""
            # Publish topic for speech and robot face animation
            self.__speech_pub_.publish(voice_msg)
            self.__text_out_pub.publish(parameters[2])
        elif parameters[0] == 'J3':
            # Simple Job 3 is to move the head/camera. This command will only be sent in manual mode. The resultant
            # published message will only be sent once per received command.
            # parameters[1] will either be 'u', 'd', 'c' or '-'
            # parameters[2] will either be 'l', 'r' or '-'             
            self.__man_head.publish(parameters[1]+parameters[2])
        return retVal

The class Report contains a constructor which declares the outcome that follows REPORT and advertises that it will publish a message on the /missions/mission_complete topic.

The class also contains an execute function which is run when the state is entered. This function simply publishes the message for the /missions/mission_complete topic.

# The REPORT state
class Report(State):
    def __init__(self):
        State.__init__(self, outcomes=['success'])
        self.__pub = rospy.Publisher('/missions/mission_complete', String, queue_size=5)
    
    def execute(self, userdata):        
        # Publishes message that mission completed
        self.__pub.publish("Mission Complete")
        return 'success'  

The only state we now need to write code for is the GREETING state.

This is in the greet_all.py file, which contains the Greeting class. The constructor declares the outcome that follows the state, what data is passed to the state, and that it will publish on the topics /speech/to_speak and /robot_face/text_out.

The class also contains an execute function which is run when the state is entered. This function constructs and publishes the two topics based on the data passed to it.

# Greeting State
class Greeting(State):
    def __init__(self):
        State.__init__(self, outcomes=['success'],
                       input_keys=['detected'])
        self.__speech_pub_ = rospy.Publisher('/speech/to_speak', voice, queue_size=5)
        self.__text_out_pub = rospy.Publisher('/robot_face/text_out', String, queue_size=5)        
    
    def execute(self, userdata):        
        # userdata.detected.ids_detected contains the IDs of those detected
        # userdata.detected.names_detected contains the names of those detected
        
        # Construct greeting
        greeting = ''
        if len(userdata.detected.names_detected) == 0:
            greeting = 'No one recognised'
        else:
            greeting = 'Hello '
            for n in userdata.detected.names_detected:
                greeting += n + ' '
                
            greeting += 'how are you '
            
            if len(userdata.detected.names_detected) == 1:
                greeting += 'today'
            elif len(userdata.detected.names_detected) == 2:
                greeting += 'both'
            else:
                greeting += 'all'
            
        rospy.loginfo(greeting)
        
        voice_msg = voice()
        voice_msg.text = greeting
        voice_msg.wav = ""
        
        # Publish topic for speech and robot face animation
        self.__speech_pub_.publish(voice_msg)
        self.__text_out_pub.publish(greeting + ":)")
        
        return 'success'

Top level control

The rodney node will be responsible for the top level control of the robot.

Our ROS package for the node is called rodney and is available in the rodney folder. The package contains all the usual ROS files and folders plus a few extra.

The config folder contains a config.yaml file which can be used to override some of the default configuration values (a sketch of the file layout follows the list). You can configure:

  • The game controller axis which is used for moving the robot forward and backward in manual locomotion mode
  • The game controller axis which is used for moving the robot clockwise and anti-clockwise in manual locomotion mode
  • The game controller axis which will be used for moving the head/camera up and down in manual locomotion mode
  • The game controller axis which will be used for moving the head/camera left and right in manual locomotion mode
  • The game controller button which will be used for selecting manual locomotion mode
  • The game controller button which will be used for moving the head/camera back to the default position
  • The game controller axes dead zone value
  • The linear velocity which is requested when the controller axis is at its maximum range
  • The angular velocity which is requested when the controller axis is at its maximum range
  • The ramp rate used to increase or decrease the linear velocity
  • The ramp rate used to increase or decrease the angular velocity
  • The battery voltage level that a low battery warning will be issued at
  • Enable/disable the wav file playback functionality when the robot is inactive
  • A list of wav filenames to play from when the robot is inactive
  • A list of matching text strings used to animate the robot face when the wav files are played
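
As an illustration, a config.yaml overriding these values might look like the sketch below. The parameter names match those read in the rodney node constructor shown later and the values are the node's defaults, but the wav file entries are placeholders, and the map-style layout keyed by '1', '2', etc. is inferred from how the code indexes the lists:

controller:
  axes:
    linear_speed_index: 0
    angular_speed_index: 1
    camera_x_index: 2
    camera_y_index: 3
  buttons:
    manual_mode_select: 0
    default_camera_pos_select: 1
  dead_zone: 2000
teleop:
  max_linear_speed: 3
  max_angular_speed: 3
motor:
  ramp:
    linear: 5.0
    angular: 5.0
battery:
  warning_level: 9.5
sounds:
  enabled: true
  filenames:
    '1': 'robot_one.wav'
    '2': 'robot_two.wav'
  text:
    '1': 'Danger Will Robinson, danger'
    '2': 'Exterminate, exterminate'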

The launch folder contains two launch files, rodney.launch and rviz.launch. The rodney.launch file is used to load all the configuration files, covered in the first three articles, into the parameter server and to start all the nodes that make up the robot project. It is similar to the launch files used so far in the project, except that it now includes the rodney_node and the rodney_missions_node (a sketch of the launch file structure follows). rviz is a 3D visualization tool for ROS which can be used to visualise data including the robot position and pose. Documentation for rviz is available on the ROS Wiki website. The rviz.launch file, along with the meshes, rviz and urdf folders, can be used for visualising Rodney. We will use the urdf model of Rodney to do some testing on a simulated Rodney robot.
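
As a sketch of the structure of rodney.launch (the node types and names here are assumptions based on the package layout described above, not the actual file contents), the relevant fragment might look like this:

<launch>
  <!-- Load the configuration overrides into the parameter server -->
  <rosparam command="load" file="$(find rodney)/config/config.yaml"/>
  <!-- Start the two new nodes (the other project nodes are omitted here) -->
  <node pkg="rodney" type="rodney_node" name="rodney" output="screen"/>
  <node pkg="rodney_missions" type="rodney_missions_node.py" name="rodney_missions" output="screen"/>
</launch>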

The image below shows a visualisation of Rodney in rviz.

The rodney_control folder is just a convenient place to store the Arduino file that was discussed in part 1.

The sounds folder is used to hold any wav files that the system is required to play. How to play these files and at the same time animate the robot face was covered in part 3.

The include/rodney and src folders contain the C++ code for the package. For this package we have one C++ class, RodneyNode, and a main routine contained within the rodney_node.cpp file.

The main routine informs ROS of our node, creates an instance of the node class and passes it the node handle.

Again we are going to do some processing of our own in a loop, so instead of passing control to ROS with a call to ros::spin, we are going to call ros::spinOnce to handle the transmitting and receiving of the topics. The loop will be maintained at a rate of 20Hz; this is set up by the call to ros::Rate, and the timing is maintained by the call to r.sleep within the loop.

Our loop will continue while the call to ros::ok returns true. It will return false when the node has finished shutting down, e.g., when you press Ctrl-C on the keyboard.

In the loop we will call sendTwist and checkTimers which are described later in the article.

// Assumed includes: the RodneyNode class declaration lives in this
// package's include/rodney folder
#include <ros/ros.h>
#include <cmath>
#include <rodney/rodney_node.h>

int main(int argc, char **argv)
{   
    ros::init(argc, argv, "rodney");
    ros::NodeHandle n;    
    RodneyNode rodney_node(n);   
    std::string node_name = ros::this_node::getName();
    ROS_INFO("%s started", node_name.c_str());

    ros::Rate r(20); // 20Hz

    while(ros::ok())
    {
        rodney_node.sendTwist();
        rodney_node.checkTimers();
        
        ros::spinOnce();
        r.sleep();
    }
    
    return 0;    
}

The constructor for our class starts by setting default values for the class parameters. For each of the parameters which are configurable using the ROS parameter server, a call is made to either param or getParam. The difference between these two calls is that with param the default value passed to the call is used if the parameter is not available in the parameter server.

We next subscribe to the topics that the node is interested in.

  • keyboard/keydown to obtain key presses from a keyboard. These key presses are generated from a remote PC to control the robot in manual mode
  • joy to obtain joystick/game pad controller input, again to control the robot from a remote PC
  • missions/mission_complete so that the node is informed when the current robot mission is completed
  • main_battery_status this will be used later in the project to receive the state of the robot's main battery
  • demand_vel this will be used later in the project to receive autonomous velocity demands

Next in the constructor is the advertisement of the topics which this node will publish.

  • /robot_face/expected_input this topic was discussed in part 3 of these articles and is used to display a status below the robot face. We will use it to show the status of the main battery
  • /missions/mission_request this will be used to pass requested missions and jobs on to the state machine node 
  • /missions/mission_cancel this can be used to cancel the current ongoing mission
  • /cmd_vel this will be used later in the project to send velocity commands to the node responsible for driving the electric motors. The requested velocities will either be from the autonomous subsystem or as a result of keyboard/joystick requests when in manual mode

Finally the constructor sets a random generator seed and obtains the current time. The use of the random number generator and the time is discussed in the section on the checkTimers method.

// Constructor 
RodneyNode::RodneyNode(ros::NodeHandle n)
{
    nh_ = n;
    
    linear_mission_demand_ = 0.0f;
    angular_mission_demand_ = 0.0f;
    
    manual_locomotion_mode_ = false;
    linear_set_speed_ = 0.5f;
    angular_set_speed_ = 1.0f;
    
    linear_speed_index_ = 0;
    angular_speed_index_ = 1;
    manual_mode_select_ = 0;
    
    camera_x_index_ = 2;
    camera_y_index_ = 3;
    default_camera_pos_select_ = 1;
    
    max_linear_speed_ = 3;
    max_angular_speed_ = 3;
    
    dead_zone_ = 2000;
    
    ramp_for_linear_ = 5.0f;
    ramp_for_angular_ = 5.0f;
    
    voltage_level_warning_ = 9.5f; 
    
    wav_play_enabled_ = false;  
    
    // Obtain any configuration values from the parameter server. If they don't exist use the defaults above
    nh_.param("/controller/axes/linear_speed_index", linear_speed_index_, linear_speed_index_);
    nh_.param("/controller/axes/angular_speed_index", angular_speed_index_, angular_speed_index_);
    nh_.param("/controller/axes/camera_x_index", camera_x_index_, camera_x_index_);
    nh_.param("/controller/axes/camera_y_index", camera_y_index_, camera_y_index_);
    nh_.param("/controller/buttons/manual_mode_select", manual_mode_select_, manual_mode_select_);
    nh_.param("/controller/buttons/default_camera_pos_select", default_camera_pos_select_, default_camera_pos_select_);
    nh_.param("/controller/dead_zone", dead_zone_, dead_zone_);
    nh_.param("/teleop/max_linear_speed", max_linear_speed_, max_linear_speed_);
    nh_.param("/teleop/max_angular_speed", max_angular_speed_, max_angular_speed_);
    nh_.param("/motor/ramp/linear", ramp_for_linear_, ramp_for_linear_);
    nh_.param("/motor/ramp/angular", ramp_for_angular_, ramp_for_angular_);
    nh_.param("/battery/warning_level", voltage_level_warning_, voltage_level_warning_);    
    nh_.param("/sounds/enabled", wav_play_enabled_, wav_play_enabled_);
    
    // Obtain the filename and text for the wav files that can be played    
    nh_.getParam("/sounds/filenames", wav_file_names_);
    nh_.getParam("/sounds/text", wav_file_texts_);
     
    // Subscribe to receive keyboard input, joystick input, mission complete and battery state
    key_sub_ = nh_.subscribe("keyboard/keydown", 5, &RodneyNode::keyboardCallBack, this);
    joy_sub_ = nh_.subscribe("joy", 1, &RodneyNode::joystickCallback, this);
    mission_sub_ = nh_.subscribe("/missions/mission_complete", 5, &RodneyNode::completeCallBack, this);
    battery_status_sub_ = nh_.subscribe("main_battery_status", 1, &RodneyNode::batteryCallback, this);
    
    // The demand_vel topic below provides the velocity demands when in
    // autonomous mode. The cmd_vel message to the motor driver is created
    // either from keyboard or game pad input when in manual mode, or from
    // this subscribed topic when in autonomous mode.
    demmand_sub_ = nh_.subscribe("demand_vel", 5, &RodneyNode::motorDemandCallBack, this);

    // Advertise the topics we publish
    face_status_pub_ = nh_.advertise<std_msgs::String>("/robot_face/expected_input", 5);
    mission_pub_ = nh_.advertise<std_msgs::String>("/missions/mission_request", 10);
    cancel_pub_ = nh_.advertise<std_msgs::Empty>("/missions/mission_cancel", 5);
    twist_pub_ = nh_.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    
    // Seed the random number generator
    srand((unsigned)time(0));
    
    last_interaction_time_ = ros::Time::now();
}

I’ll now briefly describe the functions that make up the class.

The joystickCallback is called when a message is received on the joy topic. The data from the joystick/game pad controller can be used to move the robot around and to move the head/camera when in manual mode.

Data from the joystick is in two arrays: one contains the current position of each axis and the other the current state of the buttons. Which axis and which button are used for each function is configurable by setting the index values in the parameter server.

The function first reads the axes that control the angular and linear speed of the robot. These values are compared to a dead zone value, which dictates how far an axis must be moved before the value is used to control the robot. The values from the controller are then converted into linear and angular velocity demands, such that the maximum value received from the controller results in a demand of the robot's top speed. These values are stored and will be used in the sendTwist method.

Next the axes used for controlling the movement of the head/camera in manual mode are read; again a dead zone is applied to the value. If the robot is in manual locomotion mode, the values are sent as a "J3" job to the rodney_missions_node.

Next the button values are checked. Again the index of the button used for each function can be configured. One button is used to put the robot in manual locomotion mode, which if a robot mission is currently running results in a request to cancel the mission. The second button is used as a quick way of returning the head/camera to the default position.

void RodneyNode::joystickCallback(const sensor_msgs::Joy::ConstPtr& msg)
{
    float joystick_x_axes;
    float joystick_y_axes;
            
    // manual locomotion mode can use the joystick/game pad
    joystick_x_axes = msg->axes[angular_speed_index_];
    joystick_y_axes = msg->axes[linear_speed_index_];
        
    // Check dead zone values   
    if(std::abs(joystick_x_axes) < dead_zone_)
    {
        joystick_x_axes = 0;
    }
    
    if(std::abs(joystick_y_axes) < dead_zone_)
    {
        joystick_y_axes = 0;
    }    
    
    // Check for manual movement
    if(joystick_y_axes != 0)
    {      
        joystick_linear_speed_ = -(joystick_y_axes*(max_linear_speed_/(float)MAX_AXES_VALUE_));
        last_interaction_time_ = ros::Time::now();
    }
    else
    {
        joystick_linear_speed_ = 0;
    }
    
    if(joystick_x_axes != 0)
    {
        joystick_angular_speed_ = -(joystick_x_axes*(max_angular_speed_/(float)MAX_AXES_VALUE_));
        last_interaction_time_ = ros::Time::now();
    }
    else
    {
        joystick_angular_speed_ = 0;
    }
    
    // Now check the joystick/game pad for manual camera movement               
    joystick_x_axes = msg->axes[camera_x_index_];
    joystick_y_axes = msg->axes[camera_y_index_];
    
    // Check dead zone values   
    if(std::abs(joystick_x_axes) < dead_zone_)
    {
        joystick_x_axes = 0;
    }
    
    if(std::abs(joystick_y_axes) < dead_zone_)
    {
        joystick_y_axes = 0;
    }  
    
    if(manual_locomotion_mode_ == true)
    {
        if((joystick_x_axes != 0) || (joystick_y_axes != 0))
        {
            std_msgs::String mission_msg;   
            mission_msg.data = "J3^";
        
            if(joystick_y_axes == 0)
            {
                mission_msg.data += "-^";
            }
            else if (joystick_y_axes > 0)
            {
                mission_msg.data += "u^";
            }
            else
            {
                mission_msg.data += "d^";        
            }
        
            if(joystick_x_axes == 0)
            {
                mission_msg.data += "-";
            }
            else if (joystick_x_axes > 0)
            {
                mission_msg.data += "r";
            }
            else
            {
                mission_msg.data += "l";        
            }
        
            mission_pub_.publish(mission_msg);
            
            last_interaction_time_ = ros::Time::now();
        }
    }
    
    // Button on controller selects manual locomotion mode
    if(msg->buttons[manual_mode_select_] == 1)
    {
        if(mission_running_ == true)
        {
            // Cancel the ongoing mission
            std_msgs::Empty empty_msg;
            cancel_pub_.publish(empty_msg);                        
        }
        
        // Reset speeds to zero           
        keyboard_linear_speed_ = 0.0f; 
        keyboard_angular_speed_ = 0.0f;
        
        manual_locomotion_mode_ = true;
        
        last_interaction_time_ = ros::Time::now(); 
    }
    
    // Button on controller selects central camera position   
    if((manual_locomotion_mode_ == true) && (msg->buttons[default_camera_pos_select_] == 1))
    {            
        std_msgs::String mission_msg;
        mission_msg.data = "J3^c^-";
        mission_pub_.publish(mission_msg);
        
        last_interaction_time_ = ros::Time::now();
    }
}

The keyboardCallBack is called when a message is received on the keyboard/keydown topic. The key presses can be used to move the robot around and to move the head/camera when in manual mode.

The data in the message is checked to see if it corresponds to a key that we are interested in.

The number keys are used to select robot missions. Currently we are interested in mission 2, so if the ‘2’ key is pressed the code publishes the request on the /missions/mission_request topic with the "M2" ID.

The ‘C’ key is used to request that the current mission be cancelled; this is done by sending a message on the /missions/mission_cancel topic.

The ‘D’ key is used to move the camera/head back to the default position if the robot is in manual locomotion mode.

The ‘M’ key is used to put the robot in manual locomotion mode. If a mission is currently in progress a request to cancel the mission is also sent.

The keyboard numeric keypad is used to control movement of the robot when in manual locomotion mode. For example, key ‘1’ will result in linear velocity in the reverse direction plus angular velocity in the anti-clockwise direction. The amount of velocity is set by the current values of the linear_set_speed_ and angular_set_speed_ variables. The speed of the robot can be increased or decreased with the ‘+’, ‘-’, ‘*’ and ‘/’ keys on the numeric keypad. The ‘+’ key will increase the robot's linear velocity by 10% whilst the ‘-’ key will decrease the linear velocity by 10%. The ‘*’ key increases the angular velocity by 10% and the ‘/’ key decreases the angular velocity by 10%.

The space key will stop the robot moving.

The concept of linear and angular velocity will be discussed when the Twist message is described, but basically the robot does not have steerable wheels, so a change of direction is achieved by requesting different speeds and/or directions for the two motors. The amount of steering required is set by the angular velocity, as sketched below.
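
As a rough sketch of that idea (illustrative only, not code from the project), a differential-drive base converts the linear and angular demands into individual wheel speeds, where b is the distance between the two wheels:

# Differential drive: positive angular velocity (anti-clockwise) makes
# the right wheel run faster than the left
def wheel_speeds(linear_x, angular_z, b):
    left = linear_x - (angular_z * b / 2.0)
    right = linear_x + (angular_z * b / 2.0)
    return left, right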

The up/down/left and right keys are used to move the head/camera when in manual mode.

void RodneyNode::keyboardCallBack(const keyboard::Key::ConstPtr& msg)
{
    // Check for any keys we are interested in 
    // Current keys are:
    //      'Space' - Stop the robot from moving if in manual locomotion mode
    //      'Key pad 1 and Num Lock off' - Move robot backwards and counter-clockwise if in manual locomotion mode
    //      'Key pad 2 and Num Lock off' - Move robot backwards if in manual locomotion mode
    //      'Key pad 3 and Num Lock off' - Move robot backwards and clockwise if in manual locomotion mode 
    //      'Key pad 4 and Num Lock off' - Move robot counter-clockwise if in manual locomotion mode   
    //      'Key pad 6 and Num Lock off' - Move robot clockwise if in manual locomotion mode
    //      'Key pad 7 and Num Lock off' - Move robot forwards and counter-clockwise if in manual locomotion mode    
    //      'Key pad 8 and Num Lock off' - Move robot forward if in manual locomotion mode
    //      'Key pad 9 and Num Lock off' - Move robot forwards and clockwise if in manual locomotion mode
    //      'Up key' - Move head/camera down in manual mode
    //      'Down key' - Move head/camera up in manual mode
    //      'Right key' - Move head/camera right in manual mode
    //      'Left key' - Move head/camera left in manual mode 
    //      'Key pad +' - Increase linear speed by 10% (speed when using keyboard for teleop)
    //      'Key pad -' - Decrease linear speed by 10% (speed when using keyboard for teleop)
    //      'Key pad *' - Increase angular speed by 10% (speed when using keyboard for teleop)
    //      'Key pad /' - Decrease angular speed by 10% (speed when using keyboard for teleop)   
    //      '2' - Run mission 2    
    //      'c' or 'C' - Cancel current mission
    //      'd' or 'D' - Move head/camera to the default position in manual mode 
    //      'm' or 'M' - Set locomotion mode to manual        

    // Check for key '2'; no modifiers apart from num lock are allowed
    if((msg->code == keyboard::Key::KEY_2) && ((msg->modifiers & ~keyboard::Key::MODIFIER_NUM) == 0))
    {
        // '2', start a complete scan looking for faces (mission 2)
        std_msgs::String mission_msg;
        mission_msg.data = "M2";
        mission_pub_.publish(mission_msg);
                    
        mission_running_ = true; 
        manual_locomotion_mode_ = false;
        
        last_interaction_time_ = ros::Time::now();       
    }
    else if((msg->code == keyboard::Key::KEY_c) && ((msg->modifiers & ~RodneyNode::SHIFT_CAPS_NUM_LOCK_) == 0))
    {          
        // 'c' or 'C', cancel mission if one is running
        if(mission_running_ == true)
        {
            std_msgs::Empty empty_msg;
            cancel_pub_.publish(empty_msg);
        }
        
        last_interaction_time_ = ros::Time::now();        
    }
    else if((msg->code == keyboard::Key::KEY_d) && ((msg->modifiers & ~RodneyNode::SHIFT_CAPS_NUM_LOCK_) == 0))
    {          
        // 'd' or 'D', Move camera to default position
        if(manual_locomotion_mode_ == true)
        {            
            std_msgs::String mission_msg;
            mission_msg.data = "J3^c^-";
            mission_pub_.publish(mission_msg);
        }    
        
        last_interaction_time_ = ros::Time::now();   
    }       
    else if((msg->code == keyboard::Key::KEY_m) && ((msg->modifiers & ~RodneyNode::SHIFT_CAPS_NUM_LOCK_) == 0))
    {
        // 'm' or 'M', set locomotion mode to manual (any missions going to auto should set manual_locomotion_mode_ to false)
        // When in manual mode user can teleop Rodney with keyboard or joystick
        if(mission_running_ == true)
        {
            // Cancel the ongoing mission
            std_msgs::Empty empty_msg;
            cancel_pub_.publish(empty_msg);                        
        }
        
        // Reset speeds to zero           
        keyboard_linear_speed_ = 0.0f; 
        keyboard_angular_speed_ = 0.0f;
        
        manual_locomotion_mode_ = true;
        
        last_interaction_time_ = ros::Time::now();
    }             
    else if((msg->code == keyboard::Key::KEY_KP1) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 1 on keypad without num lock
        // If in manual locomotion mode this is an indication to move backwards and counter-clockwise by the current set speeds
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = -linear_set_speed_;                        
            keyboard_angular_speed_ = -angular_set_speed_;        
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if((msg->code == keyboard::Key::KEY_KP2) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 2 on keypad without num lock
        // If in manual locomotion mode this is an indication to move backwards by the current linear set speed
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = -linear_set_speed_;        
            keyboard_angular_speed_ = 0.0f;            
        }
        
        last_interaction_time_ = ros::Time::now();
    }  
    else if((msg->code == keyboard::Key::KEY_KP3) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 3 on keypad without num lock
        // If in manual locomotion mode this is an indication to move backwards and clockwise by the current set speeds
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = -linear_set_speed_;
            keyboard_angular_speed_ = angular_set_speed_;                    
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if((msg->code == keyboard::Key::KEY_KP4) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 4 on keypad without num lock
        // If in manual locomotion mode this is an indication to turn counter-clockwise (spin on spot) by the current angular set speed
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = 0.0f;
            keyboard_angular_speed_ = angular_set_speed_;                    
        }
        
        last_interaction_time_ = ros::Time::now();
    } 
    else if((msg->code == keyboard::Key::KEY_KP6) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 6 on keypad without num lock
        // If in manual locomotion mode this is an indication to turn clockwise (spin on spot) by the current angular set speed
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = 0.0f;  
            keyboard_angular_speed_ = -angular_set_speed_;                  
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if((msg->code == keyboard::Key::KEY_KP7) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 7 on keypad without num lock
        // If in manual locomotion mode this is an indication to move forwards and counter-clockwise by the current set speeds
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = linear_set_speed_; 
            keyboard_angular_speed_ = angular_set_speed_;                   
        }
        
        last_interaction_time_ = ros::Time::now();
    }    
    else if((msg->code == keyboard::Key::KEY_KP8) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 8 on keypad without num lock
        // If in manual locomotion mode this is an indication to move forward by the current linear set speed
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = linear_set_speed_; 
            keyboard_angular_speed_ = 0.0f;                   
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if((msg->code == keyboard::Key::KEY_KP9) && ((msg->modifiers & keyboard::Key::MODIFIER_NUM) == 0))
    {
        // Key 9 on keypad without num lock
        // If in manual locomotion mode this is an indication to move forwards and clockwise by the current set speeds
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_ = linear_set_speed_; 
            keyboard_angular_speed_ = -angular_set_speed_;                   
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if(msg->code == keyboard::Key::KEY_SPACE)
    {
        // Space key
        // If in manual locomotion mode stop the robot movement 
        if(manual_locomotion_mode_ == true)
        {
            keyboard_linear_speed_= 0.0f;     
            keyboard_angular_speed_ = 0.0f;               
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if(msg->code == keyboard::Key::KEY_KP_PLUS)
    {
        // '+' key on num pad
        // If in manual locomotion increase linear speed by 10%
        if(manual_locomotion_mode_ == true)
        {
            linear_set_speed_ += ((10.0/100.0) * linear_set_speed_);
            ROS_INFO("Linear Speed now %f", linear_set_speed_);
        }  
        
        last_interaction_time_ = ros::Time::now();  
    }
    else if(msg->code == keyboard::Key::KEY_KP_MINUS)
    {
        // '-' key on num pad
        // If in manual locomotion decrease linear speed by 10%
        if(manual_locomotion_mode_ == true)
        {
            linear_set_speed_ -= ((10.0/100.0) * linear_set_speed_);
            ROS_INFO("Linear Speed now %f", linear_set_speed_);
        }  
        
        last_interaction_time_ = ros::Time::now();      
    }
    else if(msg->code == keyboard::Key::KEY_KP_MULTIPLY)
    {
        // '*' key on num pad
        // If in manual locomotion increase angular speed by 10%
        if(manual_locomotion_mode_ == true)
        {
            angular_set_speed_ += ((10.0/100.0) * angular_set_speed_);
            ROS_INFO("Angular Speed now %f", angular_set_speed_);
        }    
        
        last_interaction_time_ = ros::Time::now();
    }
    else if(msg->code == keyboard::Key::KEY_KP_DIVIDE)
    {
        // '/' key on num pad        
        // If in manual locomotion decrease angular speed by 10%
        if(manual_locomotion_mode_ == true)
        {
            angular_set_speed_ -= ((10.0/100.0) * angular_set_speed_);
            ROS_INFO("Angular Speed now %f", angular_set_speed_);
        }   
        
        last_interaction_time_ = ros::Time::now(); 
    }    
    else if(msg->code == keyboard::Key::KEY_UP)
    {
        // Up Key
        // This is a simple job not a mission - move the head/camera down
        if(manual_locomotion_mode_ == true)
        {            
            std_msgs::String mission_msg;
            mission_msg.data = "J3^d^-";
            mission_pub_.publish(mission_msg);
        }
        
        last_interaction_time_ = ros::Time::now();
    }
    else if(msg->code == keyboard::Key::KEY_DOWN)
    {
        // Down Key
        // This is a simple job not a mission - move the head/camera up
        if(manual_locomotion_mode_ == true)
        {
            std_msgs::String mission_msg;
            mission_msg.data = "J3^u^-";
            mission_pub_.publish(mission_msg);
        }
        
        last_interaction_time_ = ros::Time::now();
    }  
    else if(msg->code == keyboard::Key::KEY_LEFT)
    {
        // Left key
        // This is a simple job not a mission - move the head/camera left
        if(manual_locomotion_mode_ == true)
        {
            std_msgs::String mission_msg;
            mission_msg.data = "J3^-^l";
            mission_pub_.publish(mission_msg);
        }
        
        last_interaction_time_ = ros::Time::now();
    }       
    else if(msg->code == keyboard::Key::KEY_RIGHT)
    {
        // Right Key
        // This is a simple job not a mission - move the head/camera right
        if(manual_locomotion_mode_ == true)
        {
            std_msgs::String mission_msg;
            mission_msg.data = "J3^-^r";
            mission_pub_.publish(mission_msg);
        }
        
        last_interaction_time_ = ros::Time::now();
    }                             
    else
    {
        ;
    } 
}

The batteryCallback function is called when a message is received on the main_battery_status topic. This topic is of message type sensor_msgs/BatteryState, which contains numerous items of battery information. For now, we are just interested in the battery voltage level.

The callback will publish a message which contains an indication of a good or bad level along with the battery voltage level. This is published on the /robot_face/expected_input topic so will be displayed below the robot’s animated face.

The level at which the battery is considered low is configurable using the parameter server. If the voltage falls below this value then, as well as the warning below the animated face, a request for the robot to speak a low battery warning is sent every 5 minutes. This request is sent to the rodney_missions_node with an ID of "J2". The first parameter is the text to speak and the second parameter is the text that the animated face should use for its display. This includes the ":(" smiley so that the robot face looks sad.

// Callback for main battery status
void RodneyNode::batteryCallback(const sensor_msgs::BatteryState::ConstPtr& msg)
{ 
    // Convert float to string with two decimal places
    std::stringstream ss;
    ss << std::fixed << std::setprecision(2) << msg->voltage;
    std::string voltage = ss.str();
    
    std_msgs::String status_msg;
    
    // Publish battery voltage to the robot face
    // However the '.' will be used by the face to change the expression to neutral so we will replace with ','
    replace(voltage.begin(), voltage.end(), '.', ',');
    
    if(msg->voltage > voltage_level_warning_)
    {
        status_msg.data = "Battery level OK ";
        battery_low_count_ = 0;
    }
    else
    {
        // If the battery level goes low we wait a number of messages to confirm it was not a dip as the motors started
        if(battery_low_count_ > 1)
        {
        
            status_msg.data = "Battery level LOW ";
        
            // Speak warning every 5 minutes        
            if((ros::Time::now() - last_battery_warn_).toSec() > (5.0*60.0))
            {
                last_battery_warn_ = ros::Time::now();
            
                std_msgs::String mission_msg;
                mission_msg.data = "J2^battery level low^Battery level low:(";
                mission_pub_.publish(mission_msg);
            }
        }
        else
        {
            battery_low_count_++;
        }
    }
    
    status_msg.data += voltage + "V";                                 
    face_status_pub_.publish(status_msg);
}

The completeCallBack function is called when a message is received on the /missions/mission_complete topic. It records that the robot is no longer running a mission by setting mission_running_ to false.

void RodneyNode::completeCallBack(const std_msgs::String::ConstPtr& msg)
{
    mission_running_ = false;
    
    last_interaction_time_ = ros::Time::now();
}

The motorDemandCallBack function is called when a message is received on the demand_vel topic.

The robot movements will be either manual or autonomous; this node is responsible for using either the demands created from the keyboard or joystick in manual mode, or those from the autonomous subsystem. This callback simply stores the linear and angular demands from the autonomous subsystem.

// Callback for when motor demands received in autonomous mode
void RodneyNode::motorDemandCallBack(const geometry_msgs::Twist::ConstPtr& msg)
{ 
    linear_mission_demand_ = msg->linear.x;
    angular_mission_demand_ = msg->angular.z;
}

The sendTwist function is one of those called from main in our loop. It decides which input should be used for the actual electric motor demands: joystick, keyboard or the autonomous subsystem. The chosen demands are published in a message on the cmd_vel topic. Notice that a demand is always published, as it is normal practice for the system to keep up a constant rate of demands. If the demands are not sent, the part of the system controlling the motors may shut them down as a safety precaution.

The message is of type geometry_msgs/Twist and contains two vectors, one for linear velocity (metres/second) and one for angular velocity (radians/second). Each vector gives the velocities in three dimensions; for linear we will only use the x direction, and for angular only the velocity around the z axis. This may seem like overkill, but it does mean that we can make use of existing path planning and obstacle avoidance software later in the project. Publishing this topic also means that we can simulate our robot movements in Gazebo. Gazebo is a robot simulation tool which we will use later in this part of the article to test some of our code.

To ramp the velocities towards the target demands, the function makes use of two helper functions, rampedTwist and rampedVel. We use these to ramp towards the target velocities in order to prevent the skidding and shuddering which could occur if we attempted to change the robot's velocity in one big step. The code in these two helper functions is based on Python code from the O'Reilly book "Programming Robots with ROS".

void RodneyNode::sendTwist(void)
{
    geometry_msgs::Twist target_twist;
    
    // If in manual locomotion mode use keyboard or joystick data
    if(manual_locomotion_mode_ == true)
    {
        // Publish message based on keyboard or joystick speeds
        if((keyboard_linear_speed_ == 0) && (keyboard_angular_speed_ == 0))
        {
            // Use joystick values
            target_twist.linear.x = joystick_linear_speed_;
            target_twist.angular.z = joystick_angular_speed_;            
        }
        else
        {
            // use keyboard values
            target_twist.linear.x = keyboard_linear_speed_;
            target_twist.angular.z = keyboard_angular_speed_;                   
        }
    }
    else
    {
        // Use mission demands (autonomous)
        target_twist.linear.x = linear_mission_demand_;
        target_twist.angular.z = angular_mission_demand_;
    }
    
    ros::Time time_now = ros::Time::now();
        
    // Ramp towards our required twist velocities
    last_twist_ = rampedTwist(last_twist_, target_twist, last_twist_send_time_, time_now);
        
    last_twist_send_time_ = time_now;
        
    // Publish the Twist message
    twist_pub_.publish(last_twist_);
}
//---------------------------------------------------------------------------

geometry_msgs::Twist RodneyNode::rampedTwist(geometry_msgs::Twist prev, geometry_msgs::Twist target,
                                             ros::Time time_prev, ros::Time time_now)
{
    // Ramp the angular and linear values towards the target values
    geometry_msgs::Twist retVal;
    
    retVal.angular.z = rampedVel(prev.angular.z, target.angular.z, time_prev, time_now, ramp_for_angular_);
    retVal.linear.x = rampedVel(prev.linear.x, target.linear.x, time_prev, time_now, ramp_for_linear_);
    
    return retVal;
}
//---------------------------------------------------------------------------

float RodneyNode::rampedVel(float velocity_prev, float velocity_target, ros::Time time_prev, ros::Time time_now,
                            float ramp_rate)
{
    // Either move towards the velocity target or if difference is small jump to it
    float retVal;    
    float sign;
    float step = ramp_rate * (time_now - time_prev).toSec();
    
    if(velocity_target > velocity_prev)
    {
        sign = 1.0f;
    }
    else
    {
        sign = -1.0f;
    }
    
    float error = std::abs(velocity_target - velocity_prev);
    
    if(error < step)
    {
        // Can get to the target within this time step
        retVal = velocity_target;
    }
    else
    {
        // Move towards our target
        retVal = velocity_prev + (sign * step);
    }        
    
    return retVal;
}

The last function, checkTimers, is the other function called from main in our loop. The functionality here serves two purposes. The first is that if the robot is inactive, that is, it has not been manually controlled and has not completed a mission in the last 15 minutes, it will play a pre-existing wav file to remind you that it is still powered up. This functionality can be disabled by use of the /sounds/enabled parameter in the parameter server.

Oh, and the second purpose of the functionality, I'm afraid, is an indication of my sense of humour: all my pre-existing wav files are recordings of sci-fi robots. I figured that if a robot got bored, it might amuse itself by doing robot impressions! "Danger Will Robinson, danger". Anyway, if you don't like this idea, you can disable the functionality or just play something else to show it is still powered up and inactive.

There are a number of wav file names and text sentences to go with the wav files loaded into the parameter server. When it is time to play a wav file a random number is generated to select which wav file to play. The request is then sent using the ID "J1".

void RodneyNode::checkTimers(void)
{
    /* Check time since last interaction */
    if((wav_play_enabled_ == true) && (mission_running_ == false) && ((ros::Time::now() - last_interaction_time_).toSec() > (15.0*60.0)))
    {
        last_interaction_time_ = ros::Time::now();
        
        // Use a random number to pick the wav file
        int random = (rand()%wav_file_names_.size())+1;                
         
        // This is a simple job not a mission
        std_msgs::String mission_msg;
        std::string path = ros::package::getPath("rodney");
        mission_msg.data = "J1^" + path + "/sounds/" + wav_file_names_[std::to_string(random)] + 
                           "^" + wav_file_texts_[std::to_string(random)];        
        mission_pub_.publish(mission_msg);         
    }
}

Changes to head_control node

In part 2 of these articles we wrote the head_control package to synchronise the head movement and facial recognition functionality. In this article we want to be able to control the head manually as well. We therefore need to make some modifications to the head_control_node.cpp file.

In the HeadControlNode constructor add code to subscribe to the /head_control_node/manual topic.

// Subscribe to topic for manual head movement command
manual_sub_ = nh_.subscribe("/head_control_node/manual", 5,&HeadControlNode::manualMovementCallback, this);

Also add the following line to the end of the constructor.

target_pan_tilt_ = current_pan_tilt_;

Add code for the manualMovementCallback function which is called when a message is received on the /head_control_node/manual topic. This function processes the request to move the head/camera up, down, left, right or to the default position.

// This callback is used to process a command to manually move the head/camera
void HeadControlNode::manualMovementCallback(const std_msgs::String& msg)
{
    if(msg.data.find('u') != std::string::npos)
    {
        target_pan_tilt_.tilt = current_pan_tilt_.tilt + tilt_view_step_;

        if(target_pan_tilt_.tilt > tilt_max_)
        {
            // Moved out of range, put back on max
            target_pan_tilt_.tilt = tilt_max_;
        }
    }

    if(msg.data.find('d') != std::string::npos)
    {
        target_pan_tilt_.tilt = current_pan_tilt_.tilt - tilt_view_step_;

        if(target_pan_tilt_.tilt < tilt_min_)
        {
            // Moved out of range, put back on min
            target_pan_tilt_.tilt = tilt_min_;
        }
    }

    if(msg.data.find('l') != std::string::npos)
    {
        target_pan_tilt_.pan = current_pan_tilt_.pan + pan_view_step_;

        if(target_pan_tilt_.pan > pan_max_)
        {
            // Moved out of range, put back on max
            target_pan_tilt_.pan = pan_max_;
        }
    }

    if(msg.data.find('r') != std::string::npos)
    {
        target_pan_tilt_.pan = current_pan_tilt_.pan - pan_view_step_;

        if(target_pan_tilt_.pan < pan_min_)
        {
            // Moved out of range, put back on min
            target_pan_tilt_.pan = pan_min_;
        }
    }

    if(msg.data.find('c') != std::string::npos)
    {
        // Move to default central position
        target_pan_tilt_ = default_position_;
    }

    // Assume that if a message is received we will be moving the head/camera
    move_head_ = true;
    process_when_moved_ = nothing;
}
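
As a quick way to exercise this callback before the keyboard and joystick nodes are wired up, you can publish to the topic by hand. For example, the following commands (standard rostopic usage, shown here purely as an illustration) request a single step up and then a return to the default position:

$ rostopic pub -1 /head_control_node/manual std_msgs/String '{data: "u"}'
$ rostopic pub -1 /head_control_node/manual std_msgs/String '{data: "c"}'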

Joystick node

Throughout this article we have added functionality for the robot to be moved manually using a joystick/game pad controller. There is a joystick node available on the ROS Wiki website called joy.

However, I tried this package on two different Linux PCs and kept getting segmentation faults. Instead of doing any deep investigation into the problem, I wrote my own simple joystick node. It's simpler than the one on the ROS website as I don't bother worrying about sticky buttons etc.

I would suggest that you first try the package from the ROS website, but if you see similar problems you can use my ROS package, which is available in the joystick folder. I have used it successfully with a Microsoft Xbox 360 Wired Controller. The joystick_node.cpp file is reproduced below:

// Joystick Node. Takes input from a joystick/game pad and outputs current state in a sensor_msgs/joy topic.
// See https://www.kernel.org/doc/Documentation/input/joystick-api.txt
#include <joystick/joystick_node.h>

#include <fcntl.h>
#include <stdio.h>
#include <linux/joystick.h>

// Constructor 
Joystick::Joystick(ros::NodeHandle n, std::string device)
{
    nh_ = n;
    
    // Advertise the topics we publish
    joy_status_pub_ = nh_.advertise<sensor_msgs::Joy>("joy", 5);
        
    js_ = open(device.c_str(), O_RDONLY);
    
    if (js_ == -1)
    {
        ROS_ERROR("Problem opening joystick device");
    }
    else
    {
        int buttons = getButtonCount();
        int axes = getAxisCount();

        joyMsgs_.buttons.resize(buttons);
        joyMsgs_.axes.resize(axes);
        
        ROS_INFO("Joystick number of buttons %d, number of axis %d", buttons, axes);
    }
}

// Process the joystick input
void Joystick::process(void)
{
    js_event event;
        
    FD_ZERO(&set_);
    FD_SET(js_, &set_);
    
    tv_.tv_sec = 0;
    tv_.tv_usec = 250000;
    
    int selectResult = select(js_+1, &set_, NULL, NULL, &tv_);
    
    if(selectResult == -1)
    {
        ROS_ERROR("Error with select joystick call"); // Error
    }
    else if (selectResult)
    {
        // Data available
        if(read(js_, &event, sizeof(js_event)) == -1 && errno != EAGAIN)
        {
            // Joystick probably closed
            ;
        }
        else
        {
            switch (event.type)
            {
                case JS_EVENT_BUTTON:
                case JS_EVENT_BUTTON | JS_EVENT_INIT:
                    // Set the button value                    
                    joyMsgs_.buttons[event.number] = (event.value ? 1 : 0);
                    
                    time_last_msg_ = ros::Time::now();
                    
                    joyMsgs_.header.stamp = time_last_msg_;
                    
                    // We publish a button press right away so it is not missed
                    joy_status_pub_.publish(joyMsgs_);
                    break;
                    
                case JS_EVENT_AXIS:
                case JS_EVENT_AXIS | JS_EVENT_INIT:
                    // Set the axis value 
                    joyMsgs_.axes[event.number] = event.value;
                    
                    // Only publish if the time since the last regular message has expired
                    if((ros::Time::now() - time_last_msg_).toSec() > 0.1f)
                    {
                        time_last_msg_ = ros::Time::now();

                        joyMsgs_.header.stamp = time_last_msg_;

                        // Time to publish
                        joy_status_pub_.publish(joyMsgs_);
                    }
                    break;

                default:
                    break;            
            }
        }  
    }
    else
    {
        // No data available, select time expired.
        // Publish message to keep anything alive that needs it
        
        time_last_msg_ = ros::Time::now();
        
        joyMsgs_.header.stamp = time_last_msg_;
        
        // Publish the message
        joy_status_pub_.publish(joyMsgs_);
    }
}

// Returns the number of buttons on the controller or 0 if there is an error.
int Joystick::getButtonCount(void)
{
    int buttons;
    
    if (ioctl(js_, JSIOCGBUTTONS, &buttons) == -1)
    {
        buttons = 0;
    }

    return buttons;
}

// Returns the number of axes on the controller or 0 if there is an error.
int Joystick::getAxisCount(void)
{
    int axes;

    if (ioctl(js_, JSIOCGAXES, &axes) == -1)
    {
        axes = 0;
    }

    return axes;
}

int main(int argc, char **argv)
{
    std::string device;
    
    ros::init(argc, argv, "joystick_node");

    // ros::init() parses argc and argv looking for the := operator.
    // It then modifies the argc and argv leaving any unrecognized command-line parameters for our code to parse.
    // Use command line parameter to set the device name of the joystick or use a default.        
    if (argc > 1)
    {
        device = argv[1];
    }
    else
    {
        device = "/dev/input/js0";
    }
    
    ros::NodeHandle n;    
    Joystick joystick_node(n, device);   
    std::string node_name = ros::this_node::getName();
	ROS_INFO("%s started", node_name.c_str());	
	
	// We are not going to use ros::Rate here, the class will use select and 
	// return when it's time to spin and send any messages on the topic
    
    while(ros::ok())
    {
        // Check the joystick for an input and process the data
        joystick_node.process();
        
        ros::spinOnce();
    }
    
    return 0;    
}

Using the code

To test the code we have developed so far, I'm going to run some tests on the actual robot hardware, but we can also run tests using the Gazebo robot simulator tool on a Linux PC. In the folder rodney/urdf there is a file called rodney.urdf which models the Rodney robot. How to write a URDF (Unified Robot Description Format) model would require many articles in itself, but as always there is information about URDF on the ROS Wiki website. My model is nowhere near perfect and needs some work, but we can use it here to test the robot locomotion. All the files to do this are included in the rodney and rodney_sim_control folders.

Building the ROS packages on the workstation

On the workstation, as well as running the simulation, we also want to run the keyboard and joystick nodes so that we can control the actual robot hardware remotely.

Create a workspace with the following commands:

$ mkdir -p ~/test_ws/src 
$ cd ~/test_ws/ 
$ catkin_make

Copy the packages rodney, joystick, rodney_sim_control and ros-keyboard (from https://github.com/lrse/ros-keyboard) into the ~/test_ws/src folder and then build the code with the following commands:

$ cd ~/test_ws/ 
$ catkin_make

Check that the build completes without any errors.

Running the simulation

In the rodney_sim_control package there is a launch file that will load the robot model into the parameter server, launch Gazebo and spawn a simulation of the robot. Launch this file with the following commands:

$ cd ~/test_ws/
$ source devel/setup.bash
$ roslaunch rodney_sim_control rodney_sim_control.launch

After a short time you should see the model of Rodney in an empty world. The simulation is currently paused.

In a new terminal load the rodney config file and run the rodney node with the following commands:

$ cd ~/test_ws/ 
$ source devel/setup.bash 
$ rosparam load src/rodney/config/config.yaml
$ rosrun rodney rodney_node

An info message should be displayed reporting that the node is running.

The first test checks that a message on the demand_vel topic, as if sent from the autonomous subsystem, controls the robot's movements.

In Gazebo click the play button, bottom left of the main screen, to start the simulation. In a new terminal type the following to send a message on the demand_vel topic.

$ rostopic pub -1 /demand_vel  geometry_msgs/Twist '{linear: {x: 0.5}}'

The simulated robot will move forward at a velocity of 0.5 metres/second. Reverse the direction with the following command:

$ rostopic pub -1 /demand_vel geometry_msgs/Twist '{linear: {x: -0.5}}'

You can stop the robot movement with the following command:

$ rostopic pub -1 /demand_vel geometry_msgs/Twist '{linear: {x: 0.0}}'

Next make the simulated robot turn on the spot with the following command:

$ rostopic pub -1 /demand_vel geometry_msgs/Twist '{angular: {z: 1.0}}'

Repeating the command with a negative value will cause the robot to rotate clockwise, and a value of zero then stops the movement, as shown below.
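
Mirroring the earlier rostopic examples, those two commands would be:

$ rostopic pub -1 /demand_vel geometry_msgs/Twist '{angular: {z: -1.0}}'
$ rostopic pub -1 /demand_vel geometry_msgs/Twist '{angular: {z: 0.0}}'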

Next we will test the movement with the keyboard functionality.

$ cd ~/test_ws/ 
$ source devel/setup.bash
$ rosrun keyboard keyboard

A small window whose title is "ROS keyboard input" should be running. Make sure this window has the focus and then press the ‘m’ key to put the robot in manual locomotion mode.

Ensure "num lock" is not selected.

You can now use the keyboard's numeric keypad to drive the robot around the simulated world. The following keys can be used to move the robot.

Key pad 8 – forward
Key pad 2 – reverse
Key pad 4 – rotate anti-clockwise
Key pad 6 – rotate clockwise
Key pad 7 – forward and left
Key pad 9 – forward and right
Key pad 1 – reverse and left
Key pad 3 – reverse and right
Key pad + – increase the linear velocity
Key pad - – decrease the linear velocity
Key pad * – increase the angular velocity
Key pad / – decrease the angular velocity

The space bar will stop the robot.

Next we can test the movement with the joystick controller. Ensure the robot is stationary. In a new terminal issue the following commands.

$ cd ~/test_ws/
$ source devel/setup.bash
$ rosrun joystick joystick_node

A message showing the node has started should be displayed. With the configuration given in an unchanged rodney/config/config.yaml file and a wired Xbox 360 controller, you can control the simulated robot with the controls shown in the image below.

From the Gazebo menu, other objects can be inserted into the world. The video below shows the movement test running in Gazebo. Note that in the video Rodney is a four-wheel-drive robot; I have since updated the model, and the actual robot has two-wheel drive plus casters. This will all be explained in the next article when we move the real robot hardware.

[link VIDEO]

Building the ROS packages on the Pi (Robot hardware)

If not already done, create a catkin workspace on the Raspberry Pi and initialise it with the following commands:

$ mkdir -p ~/rodney_ws/src
$ cd ~/rodney_ws/
$ catkin_make

Copy the packages face_recognition, face_recognition_msgs, head_control, pan_tilt, rodney, rodney_missions, servo_msgs, speech and ros-keyboard (from https://github.com/lrse/ros-keyboard) into the ~/rodney_ws/src folder.

Unless you want to connect the joystick controller directly to the robot, you don't need to build the joystick package on the robot hardware. You do, however, need to build the keyboard package, as it includes a message unique to that package. I'm going to use a Linux PC connected to the same network as the robot to control it remotely.

Build the code with the following commands:

$ cd ~/rodney_ws/ 
$ catkin_make

Check that the build completes without any errors.

You will also need to compile and download the Arduino code to the Nano to control the servos.

If not already done you will need to train the face recognition software, see part 2.

Running the code on the robot

Now we are ready to run our code. With the Arduino connected to a USB port, use the launch file to start the nodes with the following commands. If no master node is running in the system, the launch command will also start the master node, roscore:

$ cd ~/rodney_ws/
$ source devel/setup.bash
$ roslaunch rodney rodney.launch

On the workstation run the following commands to start the keyboard node:

$ cd ~/test_ws 
$ source devel/setup.bash 
$ export ROS_MASTER_URI=http://ubiquityrobot:11311 
$ rosrun keyboard keyboard

A small window whose title is "ROS keyboard input" should be running.

The first test we will run on the robot hardware is "Mission 2". Make sure the keyboard window has focus and then press the ‘2’ key to start the mission.

The robot should start moving the head/camera, scanning the room for known faces. Once it has completed the scan within its head movement range, it will either report that no one was recognised or speak a greeting to those it did recognise.

The next test will check the ability to move the head/camera in manual mode using the keyboard. Make sure the keyboard window has focus and then press ‘m’ to put the system in manual mode. Use the cursor keys to move the head/camera. Press the ‘d’ key to return the head/camera to the default position.

The next test will check the ability to move the head/camera in manual mode using the joystick controller. In a new terminal on the workstation type the following commands.

$ cd ~/test_ws 
$ source devel/setup.bash 
$ export ROS_MASTER_URI=http://ubiquityrobot:11311 
$ rosrun joystick joystick_node

A message showing the node has started should be displayed. With the configuration given in an unchanged rodney/config/config.yaml file and a wired Xbox 360 controller you can control the robot head/camera movement with the controls shown in the image below.

[link VIDEO]

For the next test we will test the status indication. In a terminal at the workstation type the following commands:

$ cd ~/test_ws 
$ source devel/setup.bash 
$ export ROS_MASTER_URI=http://ubiquityrobot:11311 
$ rostopic pub -1 main_battery_status sensor_msgs/BatteryState '{voltage: 12}'

The status below the robot face should read "Battery level OK 12.00V".

In the terminal issue the following command:

$ rostopic pub -1 main_battery_status sensor_msgs/BatteryState '{voltage: 9.4}'

The status below the robot face should read "9.40V".

In the terminal issue the following command twice:

$ rostopic pub -1 main_battery_status sensor_msgs/BatteryState '{voltage: 9.4}'

The status below the robot face should read "Battery level low 9.40V", the robot should speak a low-battery warning and the facial expression should be sad.

Send the message again within 5 minutes of the last message. The warning should not be spoken.

Wait for 5 minutes and send the message again. This time the spoken warning should be repeated.

The next test will check the functionality for wav file playback. Wait for 15 minutes without issuing any commands from the keyboard and joystick. After the 15 minutes the robot should play a random wav file and animate the mouth along with the wav file.

To aid debugging here is an output from rqt_graph of the current system. A full size copy of the image is included in the source zip file.

Points of Interest

In this part of the article, we have added code to control the robot's actions and brought the code for Design Goals 1 and 2 together to form Mission 2.

In the next article we will complete Design Goal 3 by adding motors, a motor controller board and software to drive the board. We will also discuss all the robot hardware required to build Rodney including circuit diagrams and a list of the hardware required. 

History

  • Initial Release: 2018/11/13

Wednesday, 14 November 2018 / Published in Uncategorized

Hello and welcome :),

So we talked about what Dependency Inversion and Injections are (see here), and last time we looked at how we can make our own IoC container.

I also promised we will start having a look at the mature IoC containers and how they work, so the first one will be the Managed Extensibility Framework (MEF).

I do admit that I am a little biased towards this framework due to a number of reasons and I hope you will understand why once we get into discussing it. Also please note that the sheer amount of information on MEF will turn this into a mini-series only about how to work with it. Otherwise, for this first post, we will have a look at the basics so that we have a basis to compare it against other frameworks.

What is MEF?

MEF is short for Managed Extensibility Framework, and it has been part of the .NET Framework since version 4. That means quite a bit to me because, besides making it readily available without introducing a lot of 3rd party frameworks (not that there is anything inherently wrong with them), it also means that it's used behind the scenes. I don't know if you happened to notice, but in newer versions of Visual Studio (I think starting with 2013), when loading plugins Visual Studio will actually show a progress bar mentioning that the plugins are loaded (and presumably created) with MEF.

One thing to note: from some research online, MEF is not really in the IoC category of frameworks, since its real purpose is making applications extensible through plugins. That being said, I still use it as such, and when an application matures, I might even use it for its actual purpose.

So that we can better understand MEF, which uses different terminology from the other frameworks I've encountered, we're going to build a project based on a metaphor; that should make it easier to follow along with both the reasoning and the terminology.

Starting a project with MEF

Writing the blueprints

Let’s say you own a car factory that can create any car you wish, but there’s a catch, for your factory to work, it needs to have a blueprint of what you want it to create. So let’s write that first:

namespace BlogPlayground
{
    internal class Car
    {

    }
}

I know, anti-climactic, but let’s start small and expand. Our car, like any other, will need a few essential parts, like for example wheels, so let’s make that a requirement:

namespace BlogPlayground
{
    internal class Car
    {
        private const int WheelCount = 4;

        internal Car(WheelType wheelTypeType)
        {
            WheelType = wheelTypeType;
        }

        internal WheelType WheelType { get; }
    }
}

We’re going to assume that we’re using the same type of wheel for all 4 of them, we will create an additional class for that as well:

namespace BlogPlayground
{
    internal class WheelType
    {
    }
}

On the technical side, the reason this is a class and not an enum (in case some people were asking that), is because we might want to add additional features to the wheels and also it plays nicely into our example :).

So now we have everything we need to make our factory work at the most basic level. But first, we will write it without MEF and then update it.

Building the factory

Here’s how it would look without MEF:

namespace BlogPlayground
{
    internal class Factory
    {
        internal Factory()
        {

        }

        internal Car CreateCar()
        {
            return new Car(new WheelType());
        }
    }
}

Now that we've written our factory, how about testing it so we know we're on the right track? As always, we will be using NUnit for this task:

namespace BlogPlayground
{
    using NUnit.Framework;

    [TestFixture]
    public class FactoryTests
    {
        [Test]
        public void CarShouldHaveWheels()
        {
            Factory sut = new Factory();

            Car car = sut.CreateCar();

            Assert.That(car, Is.Not.Null, "car instance was not returned from the factory");
            Assert.That(car.WheelType, Is.Not.Null, "car instance should have an instance of wheel types");
        }
    }
}

We write the test, we make sure it passes and then we can continue without worrying about making mistakes.

Notice that the creation of a car and of its wheel type is hardcoded; that wouldn't help us build any kind of car, right? So the first order of business is making our container.

For MEF to work we need to add a reference to the System.ComponentModel.Composition assembly, this can be found by adding a new reference and looking in the Assemblies section.

namespace BlogPlayground
{
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    internal class Factory
    {
        private readonly CompositionContainer _container;

        internal Factory()
        {
            AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            _container = new CompositionContainer(catalog);
        }

        internal Car CreateCar()
        {
            return _container.GetExportedValue<Car>();
        }
    }
}

Now let’s look at what we have done here:

  • MEF lives inside the System.ComponentModel.Composition.Hosting namespace, which is why we added that using statement at the top of the file.
  • MEF works on the concept of “catalogs”; basically, a catalog tells MEF where to look for the blueprints and pieces it needs. The AssemblyCatalog we create in the constructor tells MEF to inspect all the types in the local assembly (of course, we could have provided another assembly if we wished; more on that later on).

  • The CompositionContainer we then construct receives the catalog as its argument; this is the brains of the factory, and we will see how it works in the next point.

  • In CreateCar we tell the container that we want an object of type Car; this makes the container look through its catalog, find the blueprint for that type and the parts it needs, and create a car for us.

Though if we were to run our test right now, we would get the following error:

No exports were found that match the constraint:
    ContractName          BlogPlayground.Car
    RequiredTypeIdentity  BlogPlayground.Car

Well, at least we have now confirmed that the container is doing its job and tried to look up an object of type BlogPlayground.Car. We will need to help it out in finding that contract (the blueprint in our analogy).

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export]
    internal class Car
    {
        private const int WheelCount = 4;

        internal Car(WheelType wheelTypeType)
        {
            WheelType = wheelTypeType;
        }

        internal WheelType WheelType { get; }
    }
}

So now we have added the using statement for MEF and also decorated the class with the [Export] attribute, which tells MEF that this object is exposed in the catalog for creation.

Now if we were to run our test we would get the following error:

System.ComponentModel.Composition.CompositionException : The composition produced a single composition error. The root cause is provided below. Review the CompositionException.Errors property for more detailed information.

1) Cannot create an instance of type ‘BlogPlayground.Car’ because a constructor could not be selected for construction. Ensure that the type either has a default constructor, or a single constructor marked with the ‘System.ComponentModel.Composition.ImportingConstructorAttribute’.

Resulting in: Cannot activate part ‘BlogPlayground.Car’.
Element: BlogPlayground.Car -> BlogPlayground.Car -> AssemblyCatalog (Assembly="BlogPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null")

Now it fails because MEF requires that, when a type doesn't have a default (parameterless) constructor, the constructor it should use is marked with [ImportingConstructor]; and because that constructor requires some additional parts, we need to mark the parameters with [Import] as well. So let's do as the error tells us and fix that:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export]
    internal class Car
    {
        private const int WheelCount = 4;

        [ImportingConstructor]
        internal Car([Import]WheelType wheelTypeType)
        {
            WheelType = wheelTypeType;
        }

        internal WheelType WheelType { get; }
    }
}

If we try to run the test now, though, it will give us the exact same error as before about not finding a contract for the car. This is where MEF is admittedly a little annoying: what it should have told us is that it cannot find a contract for WheelType. To fix this error, we're going to update that class as well:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export]
    internal class WheelType
    {
    }
}

Now if we were to run our test, it would pass. But why go through the hassle of doing all of this when we could have just solved it with the hardcoded line? Especially as it's not all that different yet, since we are still working with concrete classes. Well, you would be right to ask that, but let's now see how we can truly tap into the power of MEF.

The magic of MEF

First off let’s make the Car class into an abstract because we would like to work with many different models:

namespace BlogPlayground
{
    internal abstract class Car
    {
        private const int WheelCount = 4;

        internal Car(WheelType wheelTypeType)
        {
            WheelType = wheelTypeType;
        }

        internal WheelType WheelType { get; }
    }
}

Since we can’t instantiate an abstract class, we removed the attributes for MEF. Next, we will create a sports car:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export]
    class SportCar : Car
    {
        [ImportingConstructor]
        public SportCar([Import]WheelType wheelTypeType)
            : base(wheelTypeType)
        {
        }
    }
}

All well and good, but this won’t work because we want to create a Car but now we have a blueprint for a SportCar. To make it work, we need to tell MEF that this will be exported as a Car as well, to do that we just specify the type in the attribute like so:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export(typeof(Car))]
    internal class SportCar : Car
    {
        [ImportingConstructor]
        internal SportCar([Import]WheelType wheelTypeType)
            : base(wheelTypeType)
        {
        }
    }
}

But just to be sure, let's add another test:

[Test]
public void CarShouldBeASportCar()
{
    Factory sut = new Factory();

    Car car = sut.CreateCar();

    Assert.That(car, Is.Not.Null, "car instance was not returned from the factory");
    Assert.That(car, Is.TypeOf<SportCar>(), "the instance was not of type SportsCar");
    Assert.That(car.WheelType, Is.Not.Null, "car instance should have an instance of wheel types");
}

We write this test and it passes on the first try, without any change to the factory. For such a small code base this doesn't seem that impressive, but consider using this at the level of services in large applications: the power to change a whole behaviour just from an attribute.

Let’s see about adding an engine to the Car but this time we will be using interfaces

Creating the engine

First off we will create an interface for our Engine. We also want to export anything of this type without declaring it explicitly for export. The interface will look like this:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [InheritedExport]
    internal interface IEngine
    {
    }
}

The [InheritedExport] attribute will export anything that implements this interface as an IEngine.

Now let’s update our Car with the new prerequisite:

namespace BlogPlayground
{
    internal abstract class Car
    {
        private const int WheelCount = 4;

        internal Car(WheelType wheelTypeType, IEngine engine)
        {
            WheelType = wheelTypeType;
            Engine = engine;
        }

        internal WheelType WheelType { get; }

        internal IEngine Engine { get; }
    }
}

And since this is mandatory, we will need to update the SportCar as well:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [Export(typeof(Car))]
    internal class SportCar : Car
    {
        [ImportingConstructor]
        internal SportCar([Import]WheelType wheelTypeType, [Import]IEngine engine)
            : base(wheelTypeType, engine)
        {
        }
    }
}

Since the Engine is just an interface, we will also need to implement it, nothing fancy:

namespace BlogPlayground
{
    class SportEngine : IEngine
    {
    }
}

Notice that as soon as this class was added, all the tests pass again. Do note that an [Import] and an [Export] can only match one to one, so if we were to add another engine we would get an error telling us there is more than one Engine export and the container doesn't know what to do with them.
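
To make that concrete, here is a hedged illustration (TurboEngine is a hypothetical class, not part of the article's project) of exactly the situation to avoid:

namespace BlogPlayground
{
    // Hypothetical second engine: because IEngine is marked with
    // [InheritedExport], this class would be exported automatically.
    // The single [Import]IEngine in SportCar would then match two
    // exports, and composing a Car would fail with a composition
    // (cardinality) error rather than picking one engine for us.
    class TurboEngine : IEngine
    {
    }
}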

Though here’s a good spot to show where MEF outshines the competition if you will, and since a Car can’t have several engines, we’re going to move to the features section, so let’s add some features to our car.

Adding features

First, let’s create an interface for the features:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [InheritedExport]
    public interface IFeature
    {

    }
}

Here we are going to do the same thing with the [InheritedExport] attribute so that we can extend the feature list easily in the future. Let's create a few features for starters:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    [InheritedExport]
    public interface IFeature
    {

    }

    class USB : IFeature
    {
    }

    class GPS : IFeature
    {
    }

    class Radio : IFeature
    {
    }
}

So let’s update our car to accommodate these features:

namespace BlogPlayground
{
    using System.Collections.Generic;

    internal abstract class Car
    {
        private const int WheelCount = 4;

        internal Car(WheelType wheelTypeType, IEngine engine, IEnumerable<IFeature> features)
        {
            WheelType = wheelTypeType;
            Engine = engine;
            Features = features;
        }

        internal WheelType WheelType { get; }

        internal IEngine Engine { get; }

        public IEnumerable<IFeature> Features { get; }
    }
}

And now for the implementation. Watch the parameters carefully: an [Import] parameter can be satisfied only by a single [Export], whereas an [ImportMany] parameter can be satisfied by more than one [Export]:

namespace BlogPlayground
{
    using System.Collections.Generic;
    using System.ComponentModel.Composition;

    [Export(typeof(Car))]
    internal class SportCar : Car
    {
        [ImportingConstructor]
        internal SportCar([Import]WheelType wheelTypeType, [Import]IEngine engine, [ImportMany] IEnumerable<IFeature> features)
            : base(wheelTypeType, engine, features)
        {
        }
    }
}

Again the tests pass, but to make sure the features are all there, let's add another test:

[Test]
public void CarShouldHaveThreeFeatures()
{
    Factory sut = new Factory();

    Car car = sut.CreateCar();

    Assert.That(car, Is.Not.Null, "car instance was not returned from the factory");
    Assert.That(car.Features, Is.Not.Null, "car instance should have a collection of features");
    Assert.That(car.Features, Has.Length.EqualTo(3), "the car should have 3 features");
    Assert.That(car.Features.ElementAt(0), Is.TypeOf<USB>());
    Assert.That(car.Features.ElementAt(1), Is.TypeOf<GPS>());
    Assert.That(car.Features.ElementAt(2), Is.TypeOf<Radio>());
}

Running this test, we see that it passes as well. We have now created a way of adding features to our cars without modifying the car itself; all we need to do is create another feature class that implements the IFeature interface.

This is just one of the many features that MEF provides, besides the fact that it's also extensible. I want to show you one more thing before ending this post, since MEF has a lot more to cover than this; these are just the basics.

Multiple cars?

Let’s say you like your new car so much you want to create one for your friends as well, let’s make an example of that through a new test:

[Test]
public void ShouldBeAbleToCreateMultipleCars()
{
    Factory sut = new Factory();

    Car car1 = sut.CreateCar();
    Car car2 = sut.CreateCar();

    Assert.That(car1, Is.Not.Null, "car instance was not returned from the factory");
    Assert.That(car2, Is.Not.Null, "car instance was not returned from the factory");
    Assert.That(car1, Is.Not.EqualTo(car2), "the two instances should be different");
}

If we run this test, it will fail. That's because MEF, and Dependency Injection in general, is mostly thought of in terms of reusable, swappable parts; as such, when we ask the factory for a second instance, it returns the same instance. We can change that; all we need to do is the following:

namespace BlogPlayground
{
    using System.Collections.Generic;
    using System.ComponentModel.Composition;

    [Export(typeof(Car))]
    [PartCreationPolicy(CreationPolicy.NonShared)]
    internal class SportCar : Car
    {
        [ImportingConstructor]
        internal SportCar([Import]WheelType wheelTypeType, [Import]IEngine engine, [ImportMany] IEnumerable<IFeature> features)
            : base(wheelTypeType, engine, features)
        {
        }
    }
}

By adding the [PartCreationPolicy] attribute with CreationPolicy.NonShared, we tell MEF that whenever it creates this part it should create a new instance every time. Going back to our analogy: the axle between two wheels or the frame of the car should be shared, but each wheel should not be, since we will have 4 of them.

The creation policy can be specified on both sides ([PartCreationPolicy] on the export, RequiredCreationPolicy on the [Import]) and it can have 3 values: Shared, NonShared and Any (by default, if not specified, it is treated as Any). The combinations between them need to match, and the way they can match is as follows:

Import      Export      Instance
Shared      Shared      Single instance
Shared      NonShared   No match
Shared      Any         Single instance
NonShared   Shared      No match
NonShared   NonShared   Separate instance
NonShared   Any         Separate instance
Any         Shared      Single instance
Any         NonShared   Separate instance
Any         Any         Single instance
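
As an illustration of the import side (a sketch of mine, not code from the article's project), the ImportAttribute exposes a RequiredCreationPolicy property. A hypothetical part insisting on its own non-shared wheels could be declared like this, which per the table above still matches our WheelType export, whose policy defaults to Any:

namespace BlogPlayground
{
    using System.ComponentModel.Composition;

    // Hypothetical part, shown only to illustrate policy matching:
    // this import is satisfied only by exports whose creation policy
    // is NonShared or Any (see the table above).
    [Export]
    internal class WheelSet
    {
        [ImportingConstructor]
        internal WheelSet([Import(RequiredCreationPolicy = CreationPolicy.NonShared)] WheelType wheelType)
        {
            WheelType = wheelType;
        }

        internal WheelType WheelType { get; }
    }
}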

Conclusion

I hope you enjoyed the small part of MEF presented here. In the future we will look at other nifty features like (but not limited to) metadata, assembly discovery, function exports (yes, we can export even plain strings and functions), contracts and lazy initialization, and custom exports; and these come out of the box, without extending MEF at all.

Here are a few examples of how I (and others in the teams I worked in/with) have used MEF in the past; I'm curious what you come up with as well:

  • Making desktop application modules in different assemblies that would load when booting up.
  • Making applications that can download their modules in real time from the server and update without restarting the application.

  • Using it for enabling and disabling access to modules in an application based on roles or other criteria

  • Making functions that plug into the life cycle of the application without touching the core code.

Please note that this will not be a continuously running series, but there will be the promised additions to it. I mention this since MEF might not be the thing you or others are looking for, and it would just delay the presentation of the other frameworks.

Thank you and see you next time :),

Vlad V.

Wednesday, 14 November 2018 / Published in Uncategorized

Introduction

In this post, I would like to show how we can switch out the default logging pipeline in favor of Serilog which has a lot more providers implemented by the community and also provides a way to log structured data.

The Backstory

For those of you who have created projects in ASP.NET Core 1.1 or earlier, you might remember the Program.cs file looking like this:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;

namespace WebApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .Build();

            host.Run();
        }
    }
}

As you can see, during previous versions of ASP.NET Core, the setup for the entry point of the application used to be more explicit. Now, starting from ASP.NET Core 2.0 and higher, the default Program.cs file looks like this:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace WebApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

Though the default builder cleans up the code nicely, it does add some default (as the name implies) configurations that aren’t all that obvious.

If we take a look at what WebHost.CreateDefaultBuilder actually does, we will see the following:

public static IWebHostBuilder CreateDefaultBuilder(string[] args)
{
    var builder = new WebHostBuilder();

    if (string.IsNullOrEmpty(builder.GetSetting(WebHostDefaults.ContentRootKey)))
    {
        builder.UseContentRoot(Directory.GetCurrentDirectory());
    }

    if (args != null)
    {
        builder.UseConfiguration(new ConfigurationBuilder().AddCommandLine(args).Build());
    }

    builder.UseKestrel((builderContext, options) =>
        {
            options.Configure(builderContext.Configuration.GetSection("Kestrel"));
        })
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            var env = hostingContext.HostingEnvironment;

            config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                  .AddJsonFile($"appsettings.{env.EnvironmentName}.json", 
                                              optional: true, reloadOnChange: true);

            if (env.IsDevelopment())
            {
                var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
                if (appAssembly != null)
                {
                    config.AddUserSecrets(appAssembly, optional: true);
                }
            }

            config.AddEnvironmentVariables();

            if (args != null)
            {
                config.AddCommandLine(args);
            }
        })
        // THIS IS THE PART WE'RE INTERESTED IN. (INTEREST!!!)
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            logging.AddConsole();
            logging.AddDebug();
        })
        .ConfigureServices((hostingContext, services) =>
        {
            // Fallback
            services.PostConfigure<HostFilteringOptions>(options =>
            {
                if (options.AllowedHosts == null || options.AllowedHosts.Count == 0)
                {
                    // "AllowedHosts": "localhost;127.0.0.1;[::1]"
                    var hosts = hostingContext.Configuration["AllowedHosts"]?.Split
                                (new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
                    // Fall back to "*" to disable.
                    options.AllowedHosts = (hosts?.Length > 0 ? hosts : new[] { "*" });
                }
            });
            // Change notification
            services.AddSingleton<IOptionsChangeTokenSource<HostFilteringOptions>>(
                new ConfigurationChangeTokenSource<HostFilteringOptions>(hostingContext.Configuration));

            services.AddTransient<IStartupFilter, HostFilteringStartupFilter>();
        })
        .UseIISIntegration()
        .UseDefaultServiceProvider((context, options) =>
        {
            options.ValidateScopes = context.HostingEnvironment.IsDevelopment();
        });

    return builder;
}

Well, that sure is a whole lot of configuration for a start; good thing it's hidden behind such an easy call as CreateDefaultBuilder.

Now, if we look in the code snippet above (I marked it with INTEREST!!! so it's easy to find), you will see that by default the configuration is set up so that logging is sent to the console and to the debug channel. We won't be needing this, since we'll be using a different console sink and there's no use in having two providers write to the same console at the same time.

The Changes

So the first change we will make is the following:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(
                (webHostBuilderContext, loggingBuilder) =>
                    {
                        loggingBuilder.ClearProviders();
                    })
            .UseStartup<Startup>();
}

With this change, we’re clearing out both the console and the debug providers, so essentially now we don’t have any logging set up.

Now we're going to add the following NuGet packages (note that only two of them are required for this to work; all the other sinks are up to your own choice):

  • Serilog (this is the main package and is required)
  • Serilog.Extensions.Logging (this is used to integrate with the ASP.NET Core pipeline, it will also install Serilog as a dependency)
  • Serilog.Sinks.ColoredConsole (this package adds a colored console sink that makes it easier to distinguish between logging levels and messages; this will also install Serilog as a dependency)
  • Serilog.Enrichers.Demystify (this package is in pre-release but it makes it so that long stack traces from exceptions that cover async methods turn into a stack trace that is more developer friendly)

With these packages installed, we’re going to change the Program.cs file again and it will end up looking like this:

namespace WebApplication1
{
    using System;

    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Logging;

    using Serilog;
    using Serilog.Extensions.Logging;

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureLogging(
                    (webHostBuilderContext, loggingBuilder) =>
                        {
                            loggingBuilder.ClearProviders();

                            Serilog.Debugging.SelfLog.Enable(Console.Error); // this outputs 
                              // internal Serilog errors to the console in case something 
                              // breaks with one of the Serilog extensions or the framework itself

                            Serilog.ILogger logger = new LoggerConfiguration()
                                .Enrich.FromLogContext() // this adds more information 
                                    // to the output of the log, like when receiving http requests, 
                                    // it will provide information about the request
                                .Enrich.WithDemystifiedStackTraces() // this will change the 
                                    // stack trace of an exception into a more readable form 
                                    // if it involves async
                                .MinimumLevel.Verbose()   // this gives the minimum level to log, 
                                                          // in production the level would be higher
                                .WriteTo.ColoredConsole() // one of the logger pipeline elements 
                                                          // for writing out the log message
                                .CreateLogger();

                            loggingBuilder.AddProvider(new SerilogLoggerProvider
                                     (logger)); // this adds the serilog provider from the start
                        })
                .UseStartup<Startup>();
    }
}

Now we have integrated Serilog into the main logging pipeline used by all the components of ASP.NET Core. Notice that we also have access to the webHostBuilderContext, which has a Configuration property allowing us to read from the application configuration so that we can set up a more complex pipeline; there is also a NuGet package that allows Serilog to read its configuration from an appsettings.json file, as sketched below.
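
As a minimal sketch, assuming the Serilog.Settings.Configuration package is installed (an assumption; it is not one of the packages listed above), the pipeline could be read from the "Serilog" section of appsettings.json like this:

// Minimal sketch, assuming the Serilog.Settings.Configuration package;
// ReadFrom.Configuration pulls sinks, enrichers and minimum levels from
// the "Serilog" section of the application configuration.
Serilog.ILogger logger = new LoggerConfiguration()
    .ReadFrom.Configuration(webHostBuilderContext.Configuration)
    .CreateLogger();

loggingBuilder.AddProvider(new SerilogLoggerProvider(logger));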

Optionally, Serilog also allows a log message to carry some additional properties. For that, we would need to change the default outputTemplate from "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level:u3} {Message}{NewLine}{Exception}" to "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level} {Properties} {Message}{NewLine}{Exception}". Notice the Properties template placeholder; this is where Serilog will place all additional information that is not in the actual message, like data from the HTTP request. To see how this change would look, see the following:

Serilog.ILogger logger = new LoggerConfiguration()
    .Enrich.FromLogContext()             // this adds more information to the output of the log, 
                                         // like when receiving http requests, it will provide 
                                         // information about the request
    .Enrich.WithDemystifiedStackTraces() // this will change the stack trace of an exception 
                                         // into a more readable form if it involves async
    .MinimumLevel.Verbose()              // this gives the minimum level to log, in production 
                                         // the level would be higher
    .WriteTo.ColoredConsole(outputTemplate:
        "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level} {Properties} {Message}{NewLine}{Exception}")
                                         // one of the logger pipeline elements
                                         // for writing out the log message
    .CreateLogger();

Conclusion

Note that there are as many ways to set up a logging pipeline as there are applications, this is just my personal preference.

Also, in case you were wondering why I opted to make the changes inside the Program.cs file instead of the Startup.Configure() method, as some examples online show: it's because I believe that if the default logging is set up in its own dedicated function, this should be as well. It also introduces Serilog earlier in the process than the Startup method would, which in turn provides more information.

I hope you enjoyed this post and that it will help you better debug and maintain your applications.

Thank you and see you next time. Cheers!

Wednesday, 14 November 2018 / Published in Uncategorized

Introduction

In this post, we are going to discuss how we can add functionality to an ASP.NET Core application outside of a request.

The code for this post can be found here.

The Story

As some, if not all, of you know, web servers usually only work in the context of a request. So when we deploy an ASP.NET Core application (or any other web server) and no request arrives, it will sit idle on the server waiting for one, be it from a browser or an API client.

But there might be occasions when, depending on the application being built, we need to do some work outside of the context of a request. A list of possible scenarios goes as follows:

  • Serving notifications to users
  • Scraping currency exchange rates
  • Doing data maintenance and archival
  • Communicating with a non-deterministic external system
  • Processing an approval workflow

There are not a whole lot of scenarios in which a web server needs to do more than just serve responses to requests (otherwise this would be common knowledge), but it is useful to know how to embed such behaviour in our applications without creating separate worker applications.

The Setup

The Project

First, let’s create an ASP.NET Core application, in my example, I created a 2.1 MVC Application.

We’re going to use this project to create a background worker as an example.

The Injectable Worker

Though this step is not mandatory for our work, we will create a worker class that will be instantiated via injection so we can test out the worker class and keep it decoupled from the main application.

namespace AspNetBackgroundWorker
{
    using Microsoft.Extensions.Logging;

    public class BackgroundWorker
    {
        private readonly ILogger _logger;

        private int _counter;

        public BackgroundWorker(ILogger<BackgroundWorker> logger)
        {
            _counter = 0;
            _logger = logger;
        }

        public void Execute()
        {
            _logger.LogDebug(_counter.ToString());
            _counter++;
        }
    }
}

Notice that for this example, this class doesn’t do much except log out a counter, though the reason we’re using an ILogger is so that we can see it in action with it being created and having dependencies injected.

Registering the Worker in the Inversion of Control Container

Inside the ConfigureServices method from the Startup.cs file, we will introduce the following line:

services.AddSingleton<BackgroundWorker>();

It doesn’t need to be a singleton, but it will serve well for our purpose.
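
For illustration, if you instead wanted a fresh worker per resolution, the registration would be transient, as sketched below. Note that in this demo a transient would make the counter restart at zero every time the background loop resolves the worker, which is exactly why the singleton suits our purpose.

// Hypothetical alternative registration: a new BackgroundWorker per
// resolution, so _counter would restart at 0 on every loop iteration.
services.AddTransient<BackgroundWorker>();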

The Implementation

Now that we have a testable and injectable worker class created and registered, we will move on to making it run in the background.

For this, we will be going into the Program.cs file and change it to the following:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace AspNetBackgroundWorker
{
    using System;
    using System.Threading;

    using Microsoft.Extensions.DependencyInjection;

    public class Program
    {
        public static void Main(string[] args)
        {
            // We split up the building of the webHost with running it 
            // so that we can do some additional work before the server actually starts
            var webHost = CreateWebHostBuilder(args).Build(); 

            // We create a dedicated background thread that will be running alongside the web server.
            Thread counterBackgroundWorkerThread = new Thread(CounterHandlerAsync) 
            {
                IsBackground = true
            };

            // We start the background thread, providing it with the webHost.Service 
            // so that we can benefit from dependency injection.
            counterBackgroundWorkerThread.Start(webHost.Services); 

            webHost.Run(); // At this point, we're running the server as normal.
        }

        private static void CounterHandlerAsync(object obj)
        {
            // Here we check that the provided parameter is, in fact, an IServiceProvider
            IServiceProvider provider = obj as IServiceProvider 
                                        ?? throw new ArgumentException
            ($"Passed in thread parameter was not of type {nameof(IServiceProvider)}", nameof(obj));

            // Using an infinite loop for this demonstration but it all depends 
            // on the work you want to do.
            while (true)
            {
                // Here we create a new scope for the IServiceProvider 
                // so that we can get already built objects from the Inversion Of Control Container.
                using (IServiceScope scope = provider.CreateScope())
                {
                    // Here we retrieve the singleton instance of the BackgroundWorker.
                    BackgroundWorker backgroundWorker = scope.ServiceProvider.GetRequiredService<BackgroundWorker>();

                    // And we execute it, which will log out a number to the console
                    backgroundWorker.Execute();
                }

                // This is only placed here so that the console doesn't get spammed 
                // with too many log lines
                Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

I have provided some inline comments so that it’s easier to follow along.

To test out this code, we need to run the application in console/project mode so that we can follow along on the console window.

Conclusion

Although this example doesn’t do much in the sense of a real-life scenario, it does show us how to make a background thread and run it alongside the web server.

Also, it is not mandatory to run the thread from the Program.cs file, but since this is a background worker that will do its thing forever, I thought it was a nice spot. Some other places this could run from would be:

  • From a middleware
  • From a controller
  • Creating a class that can receive delegates and run ad-hoc, arbitrary methods (see the sketch below).
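
As a minimal sketch of that last idea (the class name AdHocWorkQueue and its shape are my own illustration, not code from the project), a delegate queue drained by a dedicated background thread might look like this:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical helper: callers hand it delegates, and a single
// background thread drains the queue and runs them in order.
public sealed class AdHocWorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public AdHocWorkQueue()
    {
        _thread = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until work arrives and
            // completes once CompleteAdding has been called.
            foreach (Action action in _work.GetConsumingEnumerable())
            {
                action();
            }
        })
        { IsBackground = true };
        _thread.Start();
    }

    // Queue an arbitrary method to run on the background thread.
    public void Enqueue(Action action) => _work.Add(action);

    public void Dispose() => _work.CompleteAdding();
}

Usage would be as simple as new AdHocWorkQueue().Enqueue(() => Console.WriteLine("ran in the background"));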

And since we are making use of the IServiceProvider, we have all the registered services at our disposal: not only the ones we registered, but also the ones the web server registered, for example ILogger, IOptions, or a DbContext.

I personally used it in a scenario where a SignalR hub sent out periodic notifications to specific users, and that work needed to run outside the context of a web request.

I hope you enjoyed this post and found it useful.

Çarşamba, 14 Kasım 2018 / Published in Uncategorized

Introduction

I was working on an app that needed hotkey support and figured out how to do it technically, but did not see any very clean solutions, so I wrote my own. This code provides a dead-simple way to attach/detach a snippet of code to a hotkey in a WPF app.

Background

Hotkeys are a relic from early versions of Windows, so we need to use interop functionality to get to them. All of this is abstracted in the HotKeyHelper class; the main trick is to get the old-school window handle (hwnd) of the main window of your WPF application. The hwnd is not available at construction time, so we need to hook an event that occurs at a point where the handle is known.

Using the code

The hotkey code is implemented to be "fire and forget", so you can add a key without having to explicitly remove it, though explicit removal is available if needed. As I mentioned in the background, it is necessary to create the HotKeyHelper at a time when the window has a valid hwnd we can access. OnSourceInitialized is a good place to do this:

  HotKeyHelper _hotKeys;
  int _throwConfettiKeyId;

  protected override void OnSourceInitialized(EventArgs e)
  {
      base.OnSourceInitialized(e);
      _hotKeys = new HotKeyHelper(this);

      // Assign Ctrl-Alt-C to our ThrowConfetti() method.
      _throwConfettiKeyId = _hotKeys.ListenForHotKey(
          Key.C,
          HotKeyModifiers.Alt | HotKeyModifiers.Control,
          () => { this.ThrowConfetti(); } // put any code you want here
      );
  }

  // Key removal is handled implicitly, but you can explicitly remove
  // a key like this
  void DoSomeStuffLater()
  {
      _hotKeys.StopListeningForHotKey(_throwConfettiKeyId);
  }

 

Here is the actual code for the helper class:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Forms;
using System.Windows.Input;
using System.Windows.Interop;

namespace HotKeyTools
{
    /// <summary>
    /// Simpler way to expose key modifiers
    /// </summary>
    [Flags]
    public enum HotKeyModifiers
    {
        None = 0,
        Alt = 1,            // MOD_ALT
        Control = 2,        // MOD_CONTROL
        Shift = 4,          // MOD_SHIFT
        WindowsKey = 8,     // MOD_WIN
    }

    /// <summary>
    /// A helpful interface for abstracting this
    /// </summary>
    public interface IHotKeyTool : IDisposable
    {
        int ListenForHotKey(System.Windows.Input.Key key, HotKeyModifiers modifiers, Action keyAction);
        void StopListeningForHotKey(int id);
    }

    // --------------------------------------------------------------------------
    /// <summary>
    /// A nice generic class to register multiple hotkeys for your app
    /// </summary>
    // --------------------------------------------------------------------------
    public class HotKeyHelper : IHotKeyTool
    {
        // Required interop declarations for working with hotkeys
        [DllImport("user32", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        protected static extern bool RegisterHotKey(IntPtr hwnd, int id, uint fsModifiers, uint vk);
        [DllImport("user32", SetLastError = true)]
        protected static extern int UnregisterHotKey(IntPtr hwnd, int id);

        protected const int WM_HOTKEY = 0x312;

        /// <summary>
        /// The unique ID to receive hotkey messages
        /// </summary>
        int _idSeed;

        /// <summary>
        /// Handle to the window listening to hotkeys
        /// </summary>
        private IntPtr _windowHandle;

        /// <summary>
        /// Remember what to do with the hot keys
        /// </summary>
        Dictionary<int, Action> _hotKeyActions = new Dictionary<int, Action>();

        // --------------------------------------------------------------------------
        /// <summary>
        /// ctor
        /// </summary>
        // --------------------------------------------------------------------------

        public HotKeyHelper(Window handlerWindow)
        {
            // Create a unique Id seed
            _idSeed = (int)((DateTime.Now.Ticks % 0x60000000) + 0x10000000);

            // Set up the hook to listen for hot keys
            _windowHandle = new WindowInteropHelper(handlerWindow).Handle;
            if (_windowHandle == IntPtr.Zero) // IntPtr is a struct; compare against IntPtr.Zero, not null
            {
                throw new ApplicationException("Cannot find window handle.  Try calling this on or after OnSourceInitialized()");
            }
            var source = HwndSource.FromHwnd(_windowHandle);
            source.AddHook(HwndHook);
        }

        // --------------------------------------------------------------------------
        /// <summary>
        /// Listen generally for hotkeys and route to the assigned action
        /// </summary>
        // --------------------------------------------------------------------------
        private IntPtr HwndHook(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
        {
            if (msg == WM_HOTKEY) 
            {
                var hotkeyId = wParam.ToInt32();
                if (_hotKeyActions.ContainsKey(hotkeyId))
                {
                    _hotKeyActions[hotkeyId]();
                    handled = true;
                }
            }
            return IntPtr.Zero;
        }

        // --------------------------------------------------------------------------
        /// <summary>
        /// Assign a key to a specific action.  Returns an id to allow you to stop
        /// listening to this key.
        /// </summary>
        // --------------------------------------------------------------------------
        public int ListenForHotKey(System.Windows.Input.Key key, HotKeyModifiers modifiers, Action doThis)
        {
            var formsKey = (Keys)KeyInterop.VirtualKeyFromKey(key);

            var hotkeyId = _idSeed++;
            _hotKeyActions[hotkeyId] = doThis;
            RegisterHotKey(_windowHandle, hotkeyId, (uint)modifiers, (uint)formsKey);
            return hotkeyId;
        }

        // --------------------------------------------------------------------------
        /// <summary>
        /// Stop listening for hotkeys. 
        ///     hotkeyId      The id returned from ListenForHotKey
        /// </summary>
        // --------------------------------------------------------------------------
        public void StopListeningForHotKey(int hotkeyId)
        {
            UnregisterHotKey(_windowHandle, hotkeyId);
        }

        // --------------------------------------------------------------------------
        /// <summary>
        /// Dispose - automatically clean up the hotkey assignments
        /// </summary>
        // --------------------------------------------------------------------------
        public void Dispose()
        {
            foreach(var hotkeyId in _hotKeyActions.Keys)
            {
                StopListeningForHotKey(hotkeyId);
            }
        }
    }
}
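
Since HotKeyHelper implements IDisposable, you may also want to dispose it when the window closes so every hotkey is unregistered; a minimal sketch, assuming the _hotKeys field from the usage example above:

  protected override void OnClosed(EventArgs e)
  {
      _hotKeys?.Dispose(); // unregisters all the hotkeys we listened for
      base.OnClosed(e);
  }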

History

2018/11/13 – initial version

Salı, 13 Kasım 2018 / Published in Uncategorized

Over the years, we’ve learned that sharing the evolution of Visual Studio with you – our users – early and often helps us to deliver the best possible experience for our community. We’re excited to share today that, as part of the development of Visual Studio 2019, we’ve been looking to refresh our theme, update our product icon and splash screens, and help you get to your code faster. I’d like to walk you through our thinking behind the changes and show off the resulting user experience that you’ll encounter every day. By leaving a comment below or suggesting a feature (or reporting a bug!) in Developer Community, you have a chance to provide input into the design of the product, early in the process.

Updating our product icon

Visual Studio 2017 icon (left) and the new Visual Studio 2019 icon (right)

The first change you might notice is the refresh of our product and preview icons. We work on improving our icons for each release of Visual Studio so that you can quickly spot which version of Visual Studio you’re opening and using. We caught some usability issues around the style of the icon in the early stages of releasing Visual Studio 2017 and we’re focused on addressing these issues for Visual Studio 2019.

One thing that came up was that the current icon’s flat style rendered it almost invisible against a background with a similar color. By adopting the Fluent Design System approach to depth, lighting, and materials, we’ve visually enhanced the icon so that it’s much more visible against a variety of backgrounds.

The new Visual Studio 2019 icon in the taskbar and start menu

We’re always learning about new situations and environments where the Visual Studio logo might appear. We keep improving its legibility, reducing the chance that it will get lost on a similar-colored background.

The Visual Studio 2019 release icon (left) next to the Visual Studio 2019 Preview icon (right)

Another challenge we faced was the difference between a Preview and final RTM version of Visual Studio. Our product icon is the obvious way for us to be able to communicate this difference, but this proved difficult with the Visual Studio 2017 icon set. For Visual Studio 2017, the icon was designed to be a part of the large Visual Studio Family. The method we used was to align all our icons with a consistent “ribbon” down the right side. However, this left less space for the identifying mark that distinguished the apps from one another.

For Visual Studio 2019, we started by removing any extra parts of the icon. We wanted to focus on the most recognizable element of the Visual Studio logo: namely, the infinity loop.

We increased the size of the infinity loop, which gave us more room and opportunity to show the difference between the Preview and Release icons. We’ve also taken a bolder approach to how we represent the Preview. By breaking the icon shape in a few places, we’ve maintained the overall shape of the Visual Studio icon. But we’re showing a distinct and accessible difference at the same time, suggesting a complete (if not production-ready) preview.

We’re working on a similar approach for the new Visual Studio for Mac icon that will debut in forthcoming Previews.

Easier to launch your code

start window for Visual Studio for Mac (left) and Visual Studio (right)

Through research and observation, we identified opportunities to simplify the choices that you must make during the most crucial steps of getting started with Visual Studio. We realized we needed to remove what we call “off-ramps” from the experience and provide you with the best paths forward to your code.

Whether you’re new to Visual Studio or a seasoned Visual Studio developer, the new start window gives you rapid access to the most common ways that developers access their code: cloning or checking out code, opening a project or solution, opening a local folder on PC, and creating a new project.

We know how important the list of recent projects and folders from the current IDE Start Page in Visual Studio 2017 is for you (more than 90 percent of you who use the Start Page also use the recent project lists), so we made sure to maintain its position as a focal point in the experience.

You’ll also find a new, streamlined, Git-first workflow that lets you clone public Git repos with just a few clicks.

Finally, we also reimagined the experience of creating a new project, with a new list of the most popular templates and improved search and filter capabilities. With the new design and step-by-step approach for selecting a template and configuring it, we believe that we have made it less overwhelming so that you can focus on a single decision at a time. You will also be able to explore other languages, platforms, and project types that Visual Studio supports and eventually be able to install them right from there.

A refreshed blue theme

The refreshed blue theme (left) next to the current blue theme (right)

One of the most noticeable visual impacts you may see when you run Visual Studio 2019 is our updated blue theme. More than half of you use the blue theme, but it’s looked the same since Visual Studio 2012. We focused our changes around a desire to declutter the Visual Studio UI. By softening the edges around our icon buttons and toolbars, as well as tool-windows, we can bring forward the focus of what you’re working on. We’ve made small changes across the whole UI, which add up to a cleaner interface while still meeting our accessibility standards. We started with the blue theme so that we can get these updates in front of you, learn from your feedback, and then apply it across our other themes.

Productivity at your fingertips

The current commanding space (top) and the simplified version for Visual Studio 2019 (bottom)

Looking for opportunities to broaden the focus of the code and remove clutter, we started with the vertical space. By removing the title bar, we took the opportunity to reassess the uppermost layout of Visual Studio without drastically changing your workflow. We have moved the search UI to increase discoverability. With the upcoming preview releases and updates, you’ll find that search in Visual Studio 2019 is more powerful and accurate.

We now have a focused location for team collaboration using Live Share in the title bar. Grouped together close to the user account signed in to Visual Studio, it’s now easier to see who you’re collaborating with. This is built into all editions of Visual Studio. We’ve also taken the time to clean up the default iconography to align it better with Windows.

These small changes allow us to reclaim vital space in the IDE, allowing for larger tool windows, more space for your code, and faster access to the tools and commands that matter to you.

Noticeable notifications

The new location, style, and icon for notifications for Visual Studio 2019. Coming in future Previews.

Early next year, one of our Preview releases will include an update to the notifications UI. Through conversations with you, we’ve heard that the current notifications location, icon, and states have been unclear to you for some time. To tackle this, we’re moving the entry point for notifications to the status bar at the bottom of the IDE. This new position avoids disruptive UI breaking your concentration, but sets us up for displaying messages from a variety of different services (from the status of a Live Share to a Pull Request comment) in the future. We’re also updating the icon from a flag to a bell, based on your comments.

An ongoing conversation

We’re excited to share these changes that we’ve been working on with you, and we’d love to hear your thoughts about our new designs, so please leave a comment below. You can also suggest a feature or file a bug in our Developer Community. We want to make Visual Studio better with every update and your feedback is critical.

Jamie Young, Group Principal Design Manager
@jamiedyoung

Jamie runs the Design Team in the Developer Tools Division of Microsoft. He has been designing all sorts of things for over 15 years and has built up an unhealthy interest in complex problems, which sits well with his current job.

Salı, 13 Kasım 2018 / Published in Uncategorized

Visual Studio IntelliCode is a set of AI-assisted capabilities that aims to improve developer productivity with features like AI-assisted IntelliSense and statement completion, code formatting, and style rule inference. During SpringOne 2018, we announced that we will bring those productivity boosters to Java developers and now we’re happy to introduce AI-assisted IntelliSense to Java in the IntelliCode Extension for Visual Studio Code.

IntelliCode saves you time by putting the most relevant suggestions at the top of your completion list. IntelliCode recommendations are based on thousands of open source projects on GitHub, each with over 100 stars, so it’s trained on the most popular usage patterns and practices. When combined with the context of your code, the completion list is tailored to promote those practices.

Check out the animation below to see IntelliCode for Java in action.

You may have noticed that IntelliCode provides the most relevant IntelliSense recommendations based on your current code context, especially within conditional blocks. IntelliCode works well with popular Java libraries and frameworks like the Java SE platform and the Spring framework. It will help you whether you are building a monolith or a modern microservices architecture.

Exploring and managing your Java project

You speak, we listen. Some of the most frequent feature requests we received from developers using Visual Studio Code concern the lack of a package view, dependency management, and project creation. Thus, we’ve built a new extension to provide those features – Java Dependencies.

See below for package and dependency view.

And create a simple Java project.

Spring Tool 4 available for Visual Studio Code

During SpringOne 2018, Pivotal announced the release of their brand new Spring Tool 4 built on top of the Language Server Protocol developed by the Visual Studio Code team, and it’s now available for Visual Studio Code, Eclipse, and Atom. Pivotal and Microsoft presented sessions to promote that during both SpringOne and Oracle Code One.

Along with Spring Initializr and Spring Boot Dashboard, now you can easily create new Spring Boot applications, navigate your source code, have smart code editing, see runtime live information in your editor and manage your running application, all within Visual Studio Code.

View the recording of Hacking Spring Boot Applications Using Visual Studio Code to learn more.

More improvements for Java in Visual Studio Code

There are also lots of additional new features added to our Java on Visual Studio Code extension lineup, including:

Debugger for Java

  1. Use a code lens to run a Java program in a much simpler way.
  2. Add support for Logpoints.
  3. Add a troubleshooting page for common errors.
  4. Support starting without debugging.
  5. Add a new user setting java.debug.settings.enableRunDebugCodeLens to enable/disable the Run|Debug code lenses on main methods #464 (Thank you Thad House!)
  6. Add Italian translation for extension configuration #463 (Thank you Julien Russo!)

Tomcat

  1. Support right-clicking an exploded WAR folder to run it directly on a Tomcat Server
  2. Support right-clicking an exploded WAR folder to debug it directly on a Tomcat Server
  3. Add the command “Generate WAR Package from Current Folder”

Maven

  1. Support quickly re-running Maven commands from history. Added an entry for historical commands in the context menu.
  2. Support triggering Maven commands from the command palette.
  3. Support hiding the Maven explorer view by default. #51
  4. Start using a separate terminal for each root folder. #68
  5. Support updating the explorer automatically when workspace folders change.

With the help of Language Support for Java by Red Hat, we now have better support for newer versions of Java (9, 10, and 11), better integration with the editor (outline, go to implementation), more code actions (convert var to type and vice versa, convert to lambda expression), and various other enhancements.

Provide feedback

Your feedback and suggestions are especially important to us and will help shape our products in the future. Please help us by taking this survey to share your thoughts!

Try it out

Please don’t hesitate to try out Visual Studio Code for your Java development and let us know what you think! Visual Studio Code is a lightweight and performant code editor with great Java support, especially for building microservices.

Install the Java Extension Pack, which includes Language Support for Java by Red Hat, Debugger for Java, Maven, and Java Test Runner.

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focusing on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.

Salı, 13 Kasım 2018 / Published in Uncategorized

Building C# 8.0

The next major version of C# is C# 8.0. It’s been in the works for quite some time, even as we built and shipped the minor releases C# 7.1, 7.2 and 7.3, and I’m quite excited about the new capabilities it will bring.

The current plan is that C# 8.0 will ship at the same time as .NET Core 3.0. However, the features will start to come alive with the previews of Visual Studio 2019 that we are working on. As those come out and you can start trying them out in earnest, we will provide a whole lot more detail about the individual features. The aim of this post is to give you an overview of what to expect, and a heads-up on where to expect it.

New features in C# 8.0

Here’s an overview of the most significant features slated for C# 8.0. There are a number of smaller improvements in the works as well, which will trickle out over the coming months.

Nullable reference types

The purpose of this feature is to help prevent the ubiquitous null reference exceptions that have riddled object-oriented programming for half a century now.

It stops you from putting null into ordinary reference types such as string – it makes those types non-nullable! It does so gently, with warnings, not errors. But on existing code there will be new warnings, so you have to opt in to using the feature (which you can do at the project, file or even source line level).

string s = null; // Warning: Assignment of null to non-nullable reference type
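
As a side note on the opt-in, the file and source line level switch ended up as a #nullable directive (this is the syntax C# 8.0 eventually shipped with; the preview builds may differ):

#nullable enable   // reference types are non-nullable from here on
#nullable disable  // and the feature is off again below this line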

What if you do want null? Then you can use a nullable reference type, such as string?:

string? s = null; // Ok

When you try to use a nullable reference, you need to check it for null first. The compiler analyzes the flow of your code to see if a null value could make it to where you use it:

void M(string? s)
{
    Console.WriteLine(s.Length); // Warning: Possible null reference exception
    if (s != null)
    {
        Console.WriteLine(s.Length); // Ok: You won't get here if s is null
    }
}

The upshot is that C# lets you express your “nullable intent”, and warns you when you don’t abide by it.

Async streams

The async/await feature of C# 5.0 lets you consume (and produce) asynchronous results in straightforward code, without callbacks:

async Task<int> GetBigResultAsync()
{
    var result = await GetResultAsync();
    if (result > 20) return result; 
    else return -1;
}

It is not so helpful if you want to consume (or produce) continuous streams of results, such as you might get from an IoT device or a cloud service. Async streams are there for that.

We introduce IAsyncEnumerable<T>, which is exactly what you’d expect: an asynchronous version of IEnumerable<T>. The language lets you await foreach over these to consume their elements, and use yield return in them to produce elements.

async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result; 
    }
}

Ranges and indices

We’re adding a type Index, which can be used for indexing. You can create one from an int that counts from the beginning, or with a prefix ^ operator that counts from the end:

Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"

We’re also introducing a Range type, which consists of two Indexes, one for the start and one for the end, and can be written with a x..y range expression. You can then index with a Range in order to produce a slice:

var slice = a[i1..i2]; // { 3, 4, 5 }
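
Ranges can also be left open at either end, in which case they default to the start or end of the collection (again, as the feature eventually shipped):

var start = a[..i1]; // { 0, 1, 2 }
var end = a[^2..];   // { 8, 9 }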

Default implementations of interface members

Today, once you publish an interface it’s game over: you can’t add members to it without breaking all the existing implementers of it.

In C# 8.0 we let you provide a body for an interface member. Thus, if somebody doesn’t implement that member (perhaps because it wasn’t there yet when they wrote the code), they will just get the default implementation instead.

interface ILogger
{
    void Log(LogLevel level, string message);
    void Log(Exception ex) => Log(LogLevel.Error, ex.ToString()); // New overload
}

class ConsoleLogger : ILogger
{
    public void Log(LogLevel level, string message) { ... }
    // Log(Exception) gets default implementation
}

The ConsoleLogger class doesn’t have to implement the Log(Exception) overload of ILogger, because it is declared with a default implementation. Now you can add new members to existing public interfaces as long as you provide a default implementation for existing implementors to use.

Recursive patterns

We’re allowing patterns to contain other patterns:

IEnumerable<string> GetEnrollees()
{
    foreach (var p in People)
    {
        if (p is Student { Graduated: false, Name: string name }) yield return name;
    }
}

The pattern Student { Graduated: false, Name: string name } checks that the Person is a Student, then applies the constant pattern false to their Graduated property to see if they’re still enrolled, and the pattern string name to their Name property to get their name (if non-null). Thus, if p is a Student, has not graduated and has a non-null name, we yield return that name.

Switch expressions

Switch statements with patterns are quite powerful in C# 7.0, but can be cumbersome to write. Switch expressions are a “lightweight” version, where all the cases are expressions:

var area = figure switch 
{
    Line _      => 0,
    Rectangle r => r.Width * r.Height,
    Circle c    => c.Radius * 2.0 * Math.PI,
    _           => throw new UnknownFigureException(figure)
};

Target-typed new-expressions

In many cases, when you’re creating a new object, the type is already given from context. In those situations we’ll let you omit the type:

Point[] ps = { new (1, 4), new (3,-2), new (9, 5) }; // all Points

The implementation of this feature was contributed by a member of the community. Thank you!

Platform dependencies

Most of the C# 8.0 language features will run on any version of .NET. However, a few of them have platform dependencies.

Async streams, indexers and ranges all rely on new framework types that will be part of .NET Standard 2.1. As Immo describes in his post Announcing .NET Standard 2.1, .NET Core 3.0 as well as Xamarin, Unity and Mono will all implement .NET Standard 2.1, but .NET Framework 4.8 will not. This means that the types required to use these features won’t be available when you target C# 8.0 to .NET Framework 4.8.

As always, the C# compiler is quite lenient about the types it depends on. If it can find types with the right names and shapes, it is happy to target them.

Default interface member implementations rely on new runtime enhancements, and we will not make those in the .NET Runtime 4.8 either. So this feature simply will not work on .NET Framework 4.8 and on older versions of .NET.

The need to keep the runtime stable has prevented us from implementing new language features in it for more than a decade. With the side-by-side and open-source nature of the modern runtimes, we feel that we can responsibly evolve them again, and do language design with that in mind. Scott explained in his Update on .NET Core 3.0 and .NET Framework 4.8 that .NET Framework is going to see less innovation in the future, instead focusing on stability and reliability. Given that, we think it is better for it to miss out on some language features than for nobody to get them.

How can I learn more?

The C# language design process is open source, and takes place in the github.com/dotnet/csharplang repo. It can be a bit overwhelming and chaotic if you don’t follow along regularly. The heartbeat of language design is the language design meetings, which are captured in the C# Language Design Notes.

About a year ago I wrote a post Introducing Nullable Reference Types in C#. It should still be an informative read.

You can also watch videos such as The future of C# from Microsoft Build 2018, or What’s Coming to C#? from .NET Conf 2018, which showcase several of the features.

Kathleen has a great post laying out the plans for Visual Basic in .NET Core 3.0.

As we start releasing the features as part of Visual Studio 2019 previews, we will also publish much more detail about the individual features.

Personally I can’t wait to get them into the hands of all of you!

Happy hacking,

Mads Torgersen, Design Lead for C#

Salı, 13 Kasım 2018 / Published in Uncategorized

Hello and welcome

Today I wanted to talk about extending your application and your DbContext to run arbitrary code when a save occurs.

The backstory

While working on quite a few applications that use databases, especially through Entity Framework, I noticed the pattern of saving changes to the database and then doing something else based on those changes. A few examples of that are as follows:

  • When the user state changes, reflect that in the UI.
  • When adding or updating a product, update the stock.
  • When deleting an entity, perform another action, like checking for validity.
  • When an entity changes in any way (add, update, delete), send that out to an external service.

These are mostly akin to database triggers: when the data changes, some action needs to be performed. But those actions are not always database related; they are more a response to the change in the database, and sometimes just business logic.

As such, in one of these applications, I found a way to incorporate that behavior and clean up the repetitive code that would follow, while also keeping it maintainable by just registering the triggers into the IoC container of ASP.NET Core.

In this post we will be having a look at the following:

  • How to extend the DbContext to allow for the triggers.
  • How to register multiple instances into the container using the same interface or base class.
  • How to create entity instances from tracked changes so we can work with concrete items.
  • How to limit our triggers to only fire under certain data conditions.
  • Injecting dependencies into our triggers.
  • Avoiding infinite loops in our triggers.

We have a long road ahead, so let’s get started.

Creating the triggers framework

ITrigger interface

We will start off with the root of our triggers and that is the ITrigger interface.

using System.Threading.Tasks;

using Microsoft.EntityFrameworkCore.ChangeTracking;

public interface ITrigger
{
    void RegisterChangedEntities(ChangeTracker changeTracker);
    Task TriggerAsync();
}

  • The RegisterChangedEntities method accepts a ChangeTracker so that if need be, we can store the changes that happened for later use.
  • The TriggerAsync method actually runs our logic; we will see why these two are kept separate when we make the changes to the DbContext.

TriggerBase base class

Next, we will look at a base class. It is not mandatory, but it exists for two main reasons:

  1. To house the common logic of the triggers, including the state of the tracked entities.
  2. To be able to filter triggers based on the entity they are meant for.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.ChangeTracking;

public abstract class TriggerBase<T> : ITrigger
{
    protected IEnumerable<TriggerEntityVersion<T>> TrackedEntities;

    protected abstract IEnumerable<TriggerEntityVersion<T>> RegisterChangedEntitiesInternal(ChangeTracker changeTracker);

    protected abstract Task TriggerAsyncInternal(TriggerEntityVersion<T> trackedTriggerEntity);

    public void RegisterChangedEntities(ChangeTracker changeTracker)
    {
        TrackedEntities = RegisterChangedEntitiesInternal(changeTracker).ToArray();
    }

    public async Task TriggerAsync()
    {
        foreach (TriggerEntityVersion<T> triggerEntityVersion in TrackedEntities)
        {
            await TriggerAsyncInternal(triggerEntityVersion);
        }
    }
}

Let’s break it down member by member and understand what’s with this base class:

  1. The class is a generic type of T, this ensures that the logic that will be running in any of its descendants will only apply to a specific entity that we want to run our trigger against.
  2. The protected TrackedEntities field holds on to the changed entities, both before and after the change so we can run our trigger logic against them.
  3. The abstract method RegisterChangedEntitiesInternal will be overridden in concrete implementations of this class and ensures that, given a ChangeTracker, it will return the set of entities we want to work against. This is not to say that it cannot return an empty collection; it’s just that if we opt to implement a trigger via the TriggerBase class, then it’s highly likely we would want to hold onto those instances for later use.
  4. The abstract method TriggerAsyncInternal runs our trigger logic against an entity we saved in the collection.
  5. The public method RegisterChangedEntities ensures that the abstract method RegisterChangedEntitiesInternal is called, then calls .ToArray() to ensure that an IEnumerable query actually executes, so that we don’t end up with a collection that is evaluated later in the process in an invalid state. This is mostly a judgment call on my end, because it is easy to forget that IEnumerable queries have deferred execution.
  6. The public method TriggerAsync just enumerates over all of the entities calling TriggerAsyncInternal on each one.

Now that we have discussed the base class, it’s time to move on to the definition of TriggerEntityVersion.

The TriggerEntityVersion class

The TriggerEntityVersion class is a helper class that serves the purpose of housing the old and the new instance of a given entity.

using System.Linq;
using System.Reflection;
using Microsoft.EntityFrameworkCore.ChangeTracking;

public class TriggerEntityVersion<T>
{
    public T Old { get; set; }
    public T New { get; set; }

    public static TriggerEntityVersion<TResult> CreateFromEntityProperty<TResult>(EntityEntry<TResult> entry) where TResult : class, new()
    {
        TriggerEntityVersion<TResult> returnedResult = new TriggerEntityVersion<TResult>
        {
            New = new TResult(),
            Old = new TResult()
        };

        // Copy the original (pre-change) values onto the Old instance.
        foreach (PropertyInfo propertyInfo in typeof(TResult)
                                                 .GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.GetProperty)
                                                 .Where(pi => entry.OriginalValues.Properties.Any(property => property.Name == pi.Name)))
        {
            // SetValue requires a writable property, hence CanWrite rather than CanRead.
            if (propertyInfo.CanWrite && (propertyInfo.PropertyType == typeof(string) || propertyInfo.PropertyType.IsValueType))
            {
                propertyInfo.SetValue(returnedResult.Old, entry.OriginalValues[propertyInfo.Name]);
            }
        }

        // Copy the current (post-change) values onto the New instance.
        foreach (PropertyInfo propertyInfo in typeof(TResult)
                                                 .GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.GetProperty)
                                                 .Where(pi => entry.CurrentValues.Properties.Any(property => property.Name == pi.Name)))
        {
            if (propertyInfo.CanWrite && (propertyInfo.PropertyType == typeof(string) || propertyInfo.PropertyType.IsValueType))
            {
                propertyInfo.SetValue(returnedResult.New, entry.CurrentValues[propertyInfo.Name]);
            }
        }

        return returnedResult;
    }
}

The breakdown for this class is as follows:

  1. We have two properties of the same type, one representing the Old instance before any modifications were made and the other representing the New state after the modifications have been made.
  2. The factory method CreateFromEntityProperty uses reflection to turn an EntityEntry into instances of our own entity, which are easier to work with, since an EntityEntry is not easy to interrogate directly. It creates two instances of our entity and copies over the original and current values being tracked, but only for properties that can be written to and that are strings or value types (since class-typed properties would represent other entities most of the time, excluding owned properties). Additionally, we only look at the properties being tracked.

We will see an example of how this is used in the following section where we see how to implement concrete triggers.

Concrete triggers

We will be creating two triggers to show off how they can differ and also how to register multiple triggers later on when we do the integration into the ServiceProvider.

Attendance trigger

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using DbBroadcast.Models; // this is just to point to the `TriggerEntityVersion`, will differ in your system
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.Extensions.Logging;

public class AttendanceTrigger : TriggerBase<ApplicationUser>
{
    private readonly ILogger<AttendanceTrigger> _logger;

    public AttendanceTrigger(ILogger<AttendanceTrigger> logger)
    {
        _logger = logger;
    }

    protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
    {
        return changeTracker
                .Entries<ApplicationUser>()
                .Where(entry => entry.State == EntityState.Modified)
                .Select(TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty);
    }

    protected override Task TriggerAsyncInternal(TriggerEntityVersion<ApplicationUser> trackedTriggerEntity)
    {
        _logger.LogInformation($"Update attendance for user {trackedTriggerEntity.New.Id}");
        return Task.CompletedTask;
    }
}

From the definition of this trigger we can see the following:

  1. This trigger will apply to the ApplicationUser entity.
  2. Since the instance of the trigger is created via ServiceProvider we can inject dependencies via its constructor as we did with the ILogger.
  3. The RegisterChangedEntitiesInternal method implements a query on the tracked entities of type ApplicationUser, only picking up those that have been modified. We could check for additional data conditions, but I would suggest doing that after the .Select call so that you can work with actual instances of your entity (see the sketch after this list).
  4. The TriggerAsyncInternal implementation will just log out the new Id of the user (or any other field we might want).
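
As a sketch of such an additional condition applied after the .Select call (the UserName comparison is purely illustrative, not part of the original trigger):

    protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
    {
        return changeTracker
                .Entries<ApplicationUser>()
                .Where(entry => entry.State == EntityState.Modified)
                .Select(TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty)
                // Fire only when the user's name actually changed:
                .Where(version => version.Old.UserName != version.New.UserName);
    }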

Ui trigger

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.Extensions.Logging;

using DbBroadcast.Models;

public class UiTrigger : TriggerBase<ApplicationUser>
{
    private readonly ILogger<UiTrigger> _logger;

    public UiTrigger(ILogger<UiTrigger> logger)
    {
        _logger = logger;
    }

    protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
    {
        return changeTracker.Entries<ApplicationUser>().Select(TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty);
    }

    protected override Task TriggerAsyncInternal(TriggerEntityVersion<ApplicationUser> trackedTriggerEntity)
    {
        _logger.LogInformation($"Update UI for user {trackedTriggerEntity.New.Id}");
        return Task.CompletedTask;
    }
}

This class is much the same as the previous one (it exists mostly for example purposes), except that it logs a different message and tracks all changes to ApplicationUser entities regardless of their state.

Registering the triggers

Now that we have written up our triggers it’s time to register them. To register multiple implementations of the same interface or base class, all we need to do is make a change in the Startup.ConfigureServices method (or wherever you’re registering your services) as follows:

services.TryAddEnumerable(new []
{
    ServiceDescriptor.Transient<ITrigger, AttendanceTrigger>(), 
    ServiceDescriptor.Transient<ITrigger, UiTrigger>(), 
});
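
For reference, ServiceDescriptor lives in Microsoft.Extensions.DependencyInjection and the TryAddEnumerable extension in Microsoft.Extensions.DependencyInjection.Extensions, so the registration above needs these usings:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;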

This way you can have triggers of differing lifetimes, as many as you want (though their lifetimes should be in line with the lifetime of your context, or you will get an error), and they remain easy to maintain. You could even have a configuration file to enable certain triggers at will :D.

Modifying the DbContext

Here I will show two cases, which can be useful depending on your requirements. You will also see that the implementation is the same; the difference is one of convenience, since for simple cases all you need to do is inherit, while for complex cases you need to make these changes manually.

Use a base class

If your context only inherits from DbContext, then you could use the following base class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using DbBroadcast.Data.Triggers;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public abstract class TriggerDbContext : DbContext
{
    private readonly IServiceProvider _serviceProvider;

    // Accept plain DbContextOptions so any derived context can pass its own typed options.
    public TriggerDbContext(DbContextOptions options, IServiceProvider serviceProvider)
        : base(options)
    {
        _serviceProvider = serviceProvider;
    }

    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
    {
        IEnumerable<ITrigger> triggers =
            _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? Enumerable.Empty<ITrigger>();

        foreach (ITrigger userTrigger in triggers)
        {
            userTrigger.RegisterChangedEntities(ChangeTracker);
        }

        int saveResult = await base.SaveChangesAsync(cancellationToken);

        foreach (ITrigger userTrigger in triggers)
        {
            await userTrigger.TriggerAsync();
        }

        return saveResult;
    }
}

Things to point out here are as follows:

  1. We inject the IServiceProvider so that we can reach out to our triggers.
  2. We override the SaveChangesAsync (same would go for all the other save methods of the context, though this one is the most used nowadays) and implement the changes.
    1. We get the triggers from the ServiceProvider (we could even filter them for a specific trigger type, but it's better to have them as is because it keeps things simple).
    2. We run through each trigger and save the entities that have changes according to our trigger registration logic.
    3. We run the actual save against the database to ensure that everything worked properly (if there's a database error, the triggers get skipped because the exception bubbles up).
    4. We then run each trigger.
    5. We return the result as if nothing happened :D.

Keep in mind that, given this implementation, you wouldn't want a trigger that updates the same entity, or you might end up in a loop; so either have firm rules for your triggers, or just don't change the same entity inside a trigger.
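
If a trigger genuinely must modify and save entities itself, one possible safeguard is a simple re-entrancy flag in the context. This is a sketch of my own on top of the override above, not something from the original implementation:

    private bool _runningTriggers;

    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
    {
        if (_runningTriggers)
        {
            // A trigger caused this save; skip the trigger pipeline to break the loop.
            return await base.SaveChangesAsync(cancellationToken);
        }

        ITrigger[] triggers =
            _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? Array.Empty<ITrigger>();

        foreach (ITrigger trigger in triggers)
        {
            trigger.RegisterChangedEntities(ChangeTracker);
        }

        int saveResult = await base.SaveChangesAsync(cancellationToken);

        _runningTriggers = true;
        try
        {
            foreach (ITrigger trigger in triggers)
            {
                await trigger.TriggerAsync();
            }
        }
        finally
        {
            _runningTriggers = false;
        }

        return saveResult;
    }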

Using your existing context

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using DbBroadcast.Data.Triggers;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using DbBroadcast.Models;
using Microsoft.Extensions.DependencyInjection;

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    private readonly IServiceProvider _serviceProvider;

    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options, IServiceProvider serviceProvider)
        : base(options)
    {
        _serviceProvider = serviceProvider;
    }

    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
    {
        IEnumerable<ITrigger> triggers =
            _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? Enumerable.Empty<ITrigger>();

        foreach (ITrigger userTrigger in triggers)
        {
            userTrigger.RegisterChangedEntities(ChangeTracker);
        }

        int saveResult = await base.SaveChangesAsync(cancellationToken);

        foreach (ITrigger userTrigger in triggers)
        {
            await userTrigger.TriggerAsync();
        }

        return saveResult;
    }
}

As you can see, this is nearly identical to the base class, but since this context already inherits from IdentityDbContext, you have to implement it yourself.

To implement your own you need to both update your constructor to accept a ServiceProvider and override the appropriate save methods.

Conclusion

For this to work, we've taken advantage of inheritance, the strategy pattern for the triggers, and some play with the ServiceProvider and multiple registrations.

I hope you enjoyed this as much as I did tinkering with it, and I'm curious to find out what kind of triggers you might come up with.

Thank you and happy coding, Vlad V.

Salı, 13 Kasım 2018 / Published in Uncategorized

Introduction

I was working on a Xamarin.Forms application that has a main application and services running at the same time. They share some features that need to be singletons, so I needed a way to organize the container to be accessible by all of them.

Background

This link gives you an introduction to DI in Xamarin and Autofac. For more details about how to install and get started with Autofac, you can check this link.

Using the code

In order to organize the container in a single place, I created a new class DIContainer for the container. This class registers the services and exposes them as public properties.

using Autofac;

public static class DIContainer
{
    private static IContainer _container;

    public static IServiceA ServiceA { get { return _container.Resolve<IServiceA>(); } }
    public static IServiceB ServiceB { get { return _container.Resolve<IServiceB>(); } }

    public static void Initialize()
    {
        if (_container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<ServiceA>().As<IServiceA>().SingleInstance();
            builder.RegisterType<ServiceB>().As<IServiceB>().SingleInstance();
            _container = builder.Build();
        }
    }
}

To use a service, each running application or service must first initialize the container by calling the Initialize method. After that, they can access the services through the public properties.

In my case, I add the following call in the App.xaml.cs of my application:

DIContainer.Initialize();

Later, each page should use the public properties of the container to get the reference to the service needed and pass it to their view model if required.

IServiceA service = DIContainer.ServiceA;
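
From there, a page can hand the resolved service to its view model. A quick sketch (MainViewModel is a hypothetical view model class, not part of the snippet above):

IServiceA service = DIContainer.ServiceA;
var viewModel = new MainViewModel(service); // the view model receives its dependency via the constructor
BindingContext = viewModel;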