**Problem Description:** GPS-enabled devices can guide you along streets and roads, but they are of limited use indoors: they cannot help a person walk through enclosed spaces such as rooms, which is a particular limitation for blind people. Our idea is to model a first-person walk through a room so that an agent can find its way out of an enclosed space, much as a person who sees a wall in front of them turns left or right, fine-tunes their direction, avoids collisions with the walls, and walks out of the room. To this end we have collected data to guide a person, especially a blind person, out of a room through its door. The typical computer-vision workflow is as follows: data for the first-person walking simulator is gathered in a game environment where room maps are created, images are captured, and each image is labelled with one of three classes: Forward, Left, or Right. Real human walking is more complicated, but at this stage we collected only these three actions, with the aim of predicting the direction to take to get out of the room.

**Content:** A human first recognizes the view in front of them and then decides on an action: stepping forward, sideways, left, right, or making a more complex movement. The same is done during the data-gathering stage: we capture an image first and label it Forward if the agent can move forward, or Right if it needs to turn right to reach the door and leave the room. A minimal classifier sketch over these three classes is given below. The whole deep-learning community is invited to help us make independent walking possible for disabled people.

---
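The following is a minimal sketch, not part of the dataset itself, of how the three class labels (Forward, Left, Right) could be used to train an image classifier. The directory layout (`data/Forward`, `data/Left`, `data/Right`), the image size, and the use of TensorFlow/Keras are assumptions made for illustration only.

```python
# Minimal sketch: train a 3-class CNN (Forward / Left / Right) on first-person frames.
# Assumptions (not specified by the dataset description): images are stored as
#   data/Forward/*.png, data/Left/*.png, data/Right/*.png, and resized to 128x128.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Labels are inferred from the class-named subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

# Small CNN: the view in front of the agent -> one of three movement actions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # Forward, Left, Right
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

At inference time, calling `model.predict` on a captured frame would return three class probabilities, and the highest-scoring action (Forward, Left, or Right) could then be conveyed to the user, for example by audio feedback.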