Thursday, 31 December 2015

RTSP RTCP and RTP

RTSP (Real-Time Streaming Protocol) is a text-based application layer protocol. In its syntax and many of its message parameters it is similar to HTTP.
RTSP is used to establish and control media streams; it plays the role of a "remote control" for multimedia services. RTSP itself does not carry the streaming data. Delivery of the media data is normally handled by RTP/RTCP.
The basic process of RTSP operation
First, the client connects to the streaming server and sends an OPTIONS command; the server's response tells the client which methods the server supports. The client then sends a DESCRIBE command to request the SDP description of a media file. The server responds with the SDP description, which includes the number of streams, the media types and other information. The client parses the SDP and, for each stream, sends a SETUP command; SETUP tells the server on which client ports the media data should be received. Once the streaming connection is set up, the client sends a PLAY command and the server starts sending the stream. During playback the client can send PAUSE and other commands to control the flow. When communication is finished, the client sends a TEARDOWN command to end the streaming session.
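The control channel is just text over a TCP connection (port 554 by default). As a rough sketch of the first step above, assuming a server reachable at the address used in the capture below, an OPTIONS request could be sent like this (error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(554) };
    inet_pton(AF_INET, "10.34.3.80", &srv.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    const char *req =
        "OPTIONS rtsp://10.34.3.80/D:/a.264 RTSP/1.0\r\n"
        "CSeq: 2\r\n"
        "User-Agent: demo-client\r\n"
        "\r\n";                               /* blank line ends the request */
    write(fd, req, strlen(req));

    char buf[2048];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* read the text response */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);                    /* e.g. "RTSP/1.0 200 OK ..." */
    }
    close(fd);
    return 0;
}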
The following is a complete RTSP exchange between a client and a server, captured with Wireshark. Client requests and server responses alternate (in the original capture they were shown in black and red respectively).

OPTIONS rtsp://10.34.3.80/D:/a.264 RTSP/1.0

CSeq: 2

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

RTSP/1.0 200 OK

CSeq: 2

Date: Tue, Jul 22 2014 02:41:21 GMT

Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

DESCRIBE rtsp://10.34.3.80/D:/a.264 RTSP/1.0

CSeq: 3

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Accept: application/sdp

RTSP/1.0 200 OK

CSeq: 3

Date: Tue, Jul 22 2014 02:41:21 GMT

Content-Base: rtsp://10.34.3.80/D:/a.264/

Content-Type: application/sdp

Content-Length: 494

 

v=0

o=- 1405995833260880 1 IN IP4 10.34.3.80

s=H.264 Video, streamed by the LIVE555 Media Server

i=D:/a.264

t=0 0

a=tool:LIVE555 Streaming Media v2014.07.04

a=type:broadcast

a=control:*

a=range:npt=0-

a=x-qt-text-nam:H.264 Video, streamed by the LIVE555 Media Server

a=x-qt-text-inf:D:/a.264

m=video 0 RTP/AVP 96

c=IN IP4 0.0.0.0

b=AS:500

a=rtpmap:96 H264/90000

a=fmtp:96 packetization-mode=1;profile-level-id=42001E;sprop-parameter-sets=Z0IAHpWoLQSZ,aM48gA==

a=control:track1

SETUP rtsp://10.34.3.80/D:/a.264/track1 RTSP/1.0

CSeq: 4

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Transport: RTP/AVP;unicast;client_port=60094-60095

RTSP/1.0 200 OK

CSeq: 4

Date: Tue, Jul 22 2014 02:41:25 GMT

Transport: RTP/AVP;unicast;destination=10.34.3.80;source=10.34.3.80;client_port=60094-60095;server_port=6970-6971

Session: 54DAFD56;timeout=65

PLAY rtsp://10.34.3.80/D:/a.264/ RTSP/1.0

CSeq: 5

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Session: 54DAFD56

Range: npt=0.000-

RTSP/1.0 200 OK

CSeq: 5

Date: Tue, Jul 22 2014 02:41:25 GMT

Range: npt=0.000-

Session: 54DAFD56

RTP-Info: url=rtsp://10.34.3.80/D:/a.264/track1;seq=10244;rtptime=2423329550

GET_PARAMETER rtsp://10.34.3.80/D:/a.264/ RTSP/1.0

CSeq: 6

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Session: 54DAFD56

RTSP/1.0 200 OK

CSeq: 6

Date: Tue, Jul 22 2014 02:41:25 GMT

Session: 54DAFD56

Content-Length: 10

 

//Termination

TEARDOWN rtsp://10.34.3.80/D:/a.264/ RTSP/1.0

CSeq: 7

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Session: 54DAFD56

RTSP/1.0 200 OK

CSeq: 7

Date: Wed, Jul 30 2014 07:13:35 GMT

As the capture shows, the format of RTSP is very similar to that of HTTP: both are text-based protocols with essentially the same syntax. They differ mainly in the following ways:
First, the method names are different. RTSP adds DESCRIBE, SETUP, PLAY and so on.
Second, HTTP is a stateless protocol: there is no required order between requests. RTSP is stateful, and its methods must be issued in a defined order.
Third, HTTP can carry payload data itself, such as web pages. RTSP only provides stream control and does not carry the streaming media data; the media data is transmitted separately over RTP/RTCP.
2. RTSP messages
1 RTSP request message format
method URL RTSP-version CRLF
message headers CRLF CRLF
message body
The method name is one of OPTIONS, DESCRIBE, SETUP, PLAY, TEARDOWN and so on.
The URL is the address of the resource, for example rtsp://192.168.0.1/video1.3gp.
The RTSP version is RTSP/1.0.
Each header line ends with a CRLF; to mark the end of the headers, the last header line is followed by an additional CRLF (an empty line).
The message body is optional.
2 RTSP response message format
RTSP-version status-code reason-phrase CRLF
message headers CRLF CRLF
message body
The RTSP version is RTSP/1.0.
The status code indicates the result of the corresponding request.
Some common status codes and their reason phrases:
200 OK - the request succeeded
400 Bad Request - malformed request
404 Not Found - resource not found
500 Internal Server Error - server error
3 The individual methods in detail
(1) OPTIONS
The client uses OPTIONS to ask the server which methods it supports. The server lists its supported methods in the Public header. From the capture above we can see that this server provides OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER and SET_PARAMETER.
The OPTIONS method is not mandatory. A client may skip it and send other requests to the server directly.
The CSeq field is the request sequence number. Each client request is assigned a number, and each response carries the same number as the request it answers.
An OPTIONS message may be sent at any time. Some clients periodically send OPTIONS to the server as a keep-alive, and some servers use the absence of periodic OPTIONS messages to decide that a client has gone offline, but not all clients and servers do this.
User-Agent
This field identifies the client. Different companies and different clients put different content here; it often indicates the client's name, version number, model and so on.
In the dialogue below, VLC is used as the client, and the field gives its release number and the version of the LIVE555 library it uses.
OPTIONS rtsp://10.34.3.80/D:/a.264 RTSP/1.0

CSeq: 2                                                                                              //Request number

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

RTSP/1.0 200 OK

CSeq: 2                                                                                              //Reply No.

Date: Tue, Jul 22 2014 02:41:21 GMT

Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

(2) DESCRIBE
The DESCRIBE message is sent by the client to ask the server for a description of the media file named in the request URL, normally as SDP information. The SDP (Session Description Protocol) data contains a session description, the media encoding types, rates and other information. For a streaming media service, the following attributes must be present in the SDP:
"a=control:"
"a=range:"
"a=rtpmap:"
"a=fmtp:"
When a file contains both audio and video, there is one such block per medium. Each media description starts with an m= line (in the capture below there is one m= block for the video and one for the audio). The Accept field in the request specifies which description formats the client can receive; here it asks for SDP.
DESCRIBE rtsp://10.34.3.80/D:/a.264 RTSP/1.0

CSeq: 3

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Accept: application/sdp                                //Request access to SDP information

RTSP/1.0 200 OK

CSeq: 3

Date: Tue, Jul 22 2014 02:41:21 GMT

Content-Base: rtsp://10.34.3.80/D:/a.264/                   //Base URL for the media description

Content-Type: application/sdp                                   //Content type of the body (SDP)

Content-Length: 494                                                //The length of SDP

 

v=0                                            //Version SDP protocol version

o=- 1405995833260880 1 IN IP4 10.34.3.80            //Origin: session originator information

s=H.264 Video, streamed by the LIVE555 Media Server  //The session name

 

i=D:/a.264                                                              //Session information (the source file)

t=0 0                                                                      //The session start and end time

a=tool:LIVE555 Streaming Media v2014.07.04          //Attribute

a=type:broadcast

a=control:*                                                             //Control information

a=range:npt=0-

a=x-qt-text-nam:H.264 Video, streamed by the LIVE555 Media Server

a=x-qt-text-inf:D:/a.264

m=video 0 RTP/AVP 96            //Media description: video

c=IN IP4 0.0.0.0               //Connection information: the address actually used for the media stream

b=AS:500                                //Video bandwidth

a=rtpmap:96 H264/90000          //Payload 96: H.264 video, 90000 Hz clock rate

a=fmtp:96 packetization-mode=1;profile-level-id=42001E;sprop-parameter-sets=Z0IAHpWoLQSZ,aM48gA==

a=control:track1                       //The video using track 1

 

m=audio 0 RTP/AVP 97         //Media description: audio; the following lines describe the audio stream

b=AS:19                              //Audio bandwidth

a=rtpmap:97 MP4A-LATM/11025/1     //Payload 97: MP4A-LATM audio, 11025 Hz clock rate, 1 channel

a=fmtp:97 profile-level-id=15; object=2; cpresent=0; config=40002A103FC0

a=mpeg4-esid:101=x-envivio-verid:00011118

a=control:trackID=2               //The audio uses track 2

The m= line ("media") describes a media stream that the sender supports. Taking an example in detail:
m=audio 3458 RTP/AVP 0 96 97
The first parameter, audio, is the media name: it indicates an audio stream.
The second parameter, 3458, is the port number: the sender will send the audio stream from local port 3458.
The third parameter, RTP/AVP, is the transport: RTP over UDP.
The remaining parameters list the supported payload type numbers, here three of them.
a=rtpmap:0 PCMU
a=rtpmap:96 G726-32/8000
a=rtpmap:97 AMR-WB
An a= line gives an attribute of the media, in the form attribute-name:attribute-value.
Format: a=rtpmap:<payload type> <encoding name>/<clock rate>
Payload type 0 is statically assigned to PCMU.
Payload type 96 is dynamically assigned here to the G.726 coding scheme.
Payload type 97 is dynamically assigned here to adaptive multi-rate wideband coding (AMR-WB).
For video:
m=video 3400 RTP/AVP 98 99
The first parameter, video, is the media name: it indicates a video stream.
The second parameter, 3400, is the port number: the sender will send the video stream from local port 3400.
The third parameter, RTP/AVP, is the transport: RTP over UDP.
The fourth and fifth parameters list the two supported payload type numbers.
a=rtpmap:98 MPV
a=rtpmap:99 H.261
Payload type 98 is dynamically assigned here to MPV.
Payload type 99 is dynamically assigned here to H.261.
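For illustration, a receiver might pull the payload number, codec name and clock rate out of an rtpmap attribute with something as simple as sscanf. A minimal sketch, using the H.264 line from the SDP above:

#include <stdio.h>

int main(void)
{
    const char *line = "a=rtpmap:96 H264/90000";   /* taken from the SDP above */
    int pt = 0, clock = 0;
    char codec[32];

    /* a=rtpmap:<payload type> <encoding name>/<clock rate> */
    if (sscanf(line, "a=rtpmap:%d %31[^/]/%d", &pt, codec, &clock) == 3)
        printf("payload %d uses %s at %d Hz\n", pt, codec, clock);
    return 0;
}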
(3) SETUP
The SETUP message negotiates the transport mechanism and establishes an RTSP session. The client may also send SETUP again for a stream that is already playing, to change the transport parameters; the server may accept the new parameters, or refuse with "455 Method Not Valid In This State".
The Transport header in the request lists the transport parameters acceptable to the client.
The Transport header in the response contains the transport parameters confirmed by the server.
If the request does not contain a session ID, the server generates one.
SETUP rtsp://10.34.3.80/D:/a.264/track1 RTSP/1.0   //track1 selects stream (track) 1

CSeq: 4

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)   //The client version information

Transport: RTP/AVP;unicast;client_port=60094-60095       //RTP/AVP means RTP over UDP; unicast (as opposed to multicast). client_port gives the client's RTP port (60094) and RTCP port (60095)

RTSP/1.0 200 OK

CSeq: 4

Date: Tue, Jul 22 2014 02:41:25 GMT

Transport: RTP/AVP;unicast;destination=10.34.3.80;source=10.34.3.80;client_port=60094-60095;server_port=6970-6971 //The transport parameters confirmed by the server

Session: 54DAFD56;timeout=65    //SessionID          
As this SETUP exchange shows, the RTP port is an even number and RTCP uses the adjacent odd port.
The exchange above uses RTP over UDP. The following SETUP exchange uses RTP over TCP.
SETUP rtsp://10.34.3.80/D:/a.264/track1 RTSP/1.0   //track1 selects stream (track) 1

CSeq: 4

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)  

Transport: RTP/AVP/TCP;unicast;interleaved= 0-1

RTSP/1.0 200 OK

CSeq: 4

Date: Tue, Jul 22 2014 02:41:25 GMT

Transport: RTP/AVP/TCP;interleaved=0-1

Session: 54DAFD56;timeout=65

Notice that the Transport field of this SETUP command is RTP/AVP/TCP and carries an extra interleaved=0-1 parameter. With RTP over TCP, RTP and RTCP packets are sent over the same TCP connection, so the interleaved channel number is used to tell them apart: interleaved channel 0 carries RTP packets and channel 1 carries RTCP packets.
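For illustration, receiving one interleaved frame amounts to reading a 4-byte prefix (the ASCII character '$', the channel number, and a 16-bit big-endian length) followed by the payload. A minimal sketch, ignoring partial reads:

#include <stdint.h>
#include <unistd.h>

/* Read one interleaved frame from the RTSP TCP socket.
 * Returns the channel (0 = RTP, 1 = RTCP per the SETUP above) or -1 on error. */
static int read_interleaved(int fd, uint8_t *buf, size_t bufsize, uint16_t *len)
{
    uint8_t hdr[4];

    if (read(fd, hdr, 4) != 4 || hdr[0] != '$')
        return -1;                         /* not an interleaved frame */

    *len = (hdr[2] << 8) | hdr[3];         /* 16-bit big-endian payload length */
    if (*len > bufsize || read(fd, buf, *len) != *len)
        return -1;

    return hdr[1];                         /* interleaved channel number */
}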
(4) PLAY
The PLAY method tells the server to start sending data using the mechanism agreed in SETUP. The server transmits data from the start time specified in the PLAY message until the end of the range. The server may queue PLAY requests; a later PLAY request waits until an earlier one has completed.
The Range header specifies the playback start time. If the message arrives after the specified time, playback begins immediately.
A PLAY request without a Range header is also legal: the stream plays from the beginning and runs until it is paused. If the stream was paused with PAUSE, it resumes from the pause point. If the stream is already playing, such a PLAY request has no effect; a client can use this to test whether the server is still alive.
PLAY rtsp://10.34.3.80/D:/a.264/ RTSP/1.0

CSeq: 5

User-Agent: LibVLC/2.0.7 (LIVE555 Streaming Media v2012.12.18)

Session: 54DAFD56                 //Session ID, as returned in the SETUP response

Range: npt=0.000-                 //The requested start time

RTSP/1.0 200 OK

CSeq: 5

Date: Tue, Jul 22 2014 02:41:25 GMT

Range: npt=0.000- 

Session: 54DAFD56

RTP-Info: url=rtsp://10.34.3.80/D:/a.264/track1;seq=10244;rtptime=2423329550  //RTP information


In RTP-Info, the url field identifies the stream the parameters refer to, seq is the sequence number of the first RTP packet of the stream, and rtptime is the RTP timestamp corresponding to the start of the Range.
(5) PAUSE
The PAUSE message asks the server to pause the stream temporarily. If the request URL names a specific media stream, only that stream is paused; for example, pausing only the audio effectively mutes playback. If the request URL names a group of streams, all streams in the group are paused. A server does not have to support PAUSE; a live stream, for instance, may not be pausable. When a server does not support a method it responds with "501 Not Implemented".
A PAUSE request may carry a Range header specifying the point at which the stream should pause, called the pause point. This Range header must contain a single precise value, not a time range. If it specifies a time outside the range of the current PLAY request, the server returns "457 Invalid Range". If the Range header is absent, the stream is paused as soon as the message is received, and the pause point is set to the current playback time.
(6) TEARDOWN
TEARDOWN terminates the media stream for the given URL and releases the resources associated with the stream.
3. The RTP/RTCP protocols
RTP stands for Real-Time Transport Protocol. It is designed for carrying multimedia data streams, usually over UDP, although it can also run over TCP. Some people classify it as an application layer protocol and others as a transport layer protocol; either view is reasonable. RTP provides a timestamp and a sequence number in each packet: the sender sets the timestamp at sampling time, and the receiver plays the data back according to the timestamps. RTP itself only supports the timely delivery of real-time data; it provides no mechanism for reliable in-order delivery and no flow or congestion control. It relies on RTCP to provide those services.
Version (V): bits 0-1, 2 bits. Identifies the RTP version in use.
Padding (P): bit 2, 1 bit. If set, the packet carries extra padding bytes at the end.
Extension (X): bit 3, 1 bit. If set, a header extension follows the fixed header.
CSRC count (CC): bits 4-7, 4 bits. The number of CSRC identifiers following the fixed header.
Marker (M): bit 8, 1 bit. Its meaning is defined by the profile in use.
Payload type (PT): bits 9-15, 7 bits. Identifies the format of the RTP payload.
Sequence number (SN): bits 16-31, 16 bits. The sender increments it by 1 for every RTP packet sent, and the receiver uses it to detect packet loss. Note: the initial value is random.
Timestamp: 32 bits. The sampling instant of the first byte in the packet. It starts from an initial value and increases with time; it keeps increasing even when no packets are sent. The timestamp is essential for removing jitter and for synchronization.
SSRC (synchronization source identifier): 32 bits. Identifies the source of the RTP packets; two sources in the same RTP session must not have the same SSRC value. The value is generated randomly according to a defined algorithm.
CSRC list (contributing source list): 0-15 entries of 32 bits each. Identifies the original sources that contributed to a packet produced by an RTP mixer.
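Expressed in code, unpacking the fixed 12-byte RTP header described above might look like the following sketch (network byte order, no CSRC list or header extension handling):

#include <stdint.h>

struct rtp_header {
    uint8_t  version, padding, extension, cc, marker, payload_type;
    uint16_t seq;
    uint32_t timestamp, ssrc;
};

/* Parse the fixed 12-byte RTP header from a received packet. */
static int rtp_parse(const uint8_t *p, struct rtp_header *h)
{
    h->version      = p[0] >> 6;            /* V: 2 bits  */
    h->padding      = (p[0] >> 5) & 1;      /* P: 1 bit   */
    h->extension    = (p[0] >> 4) & 1;      /* X: 1 bit   */
    h->cc           = p[0] & 0x0f;          /* CC: 4 bits */
    h->marker       = p[1] >> 7;            /* M: 1 bit   */
    h->payload_type = p[1] & 0x7f;          /* PT: 7 bits */
    h->seq          = (p[2] << 8) | p[3];
    h->timestamp    = ((uint32_t)p[4] << 24) | (p[5] << 16) | (p[6] << 8) | p[7];
    h->ssrc         = ((uint32_t)p[8] << 24) | (p[9] << 16) | (p[10] << 8) | p[11];
    return h->version == 2 ? 0 : -1;        /* the current RTP version is 2 */
}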
The RTCP protocol
RTCP stands for Real-Time Control Protocol. RTCP is normally used together with RTP to manage transmission quality by exchanging control information during the session. During an RTP session, the participants periodically send RTCP packets containing statistics such as the number of packets sent and the number of packets lost. The server can use this information to change the transmission rate dynamically, or even change the payload type. Used together, RTP and RTCP achieve good transmission efficiency at minimal cost, which makes them well suited to carrying real-time streams.
RTSP usually uses RTP to transport the real-time stream. RTP generally uses an even port number, and RTCP uses the adjacent port, i.e. the RTP port number + 1.
RTCP control information is carried in several packet types, also transmitted over UDP. There are five main types:
1. SR: sender report, sent by participants that send RTP data.
2. RR: receiver report, sent by participants that only receive RTP data.
3. SDES: source description, carrying identification information about session members, such as user name, e-mail address, phone number, etc.
4. BYE: sent when a member leaves, to notify the others that it is quitting the session.
5. APP: application-defined, used as an extension point of the RTCP protocol.
Version (V): same as in the RTP header.
Padding (P): same as in the RTP header.
Reception report count (RC): 5 bits. The number of reception report blocks in this SR packet.
Packet type (PT): 8 bits. For an SR packet the type is 200.
Length: the length of the SR packet in 32-bit words, minus 1.
SSRC (synchronization source identifier): identifies the sender of this SR packet; it is the same as the SSRC used in the corresponding RTP packets.
NTP timestamp (Network Time Protocol): the absolute wall-clock time at which the SR packet was sent. It is used to synchronize different streams.
RTP timestamp: the RTP timestamp corresponding to the NTP timestamp above, using the same initial value and clock as the timestamps in the RTP packets.
Sender's packet count: the total number of RTP packets sent since transmission began, up to the time this SR packet was generated; the sender's octet count likewise totals the payload bytes sent, excluding headers and padding. The counts are reset if the sender changes its SSRC.
SSRC_n (source identifier): the reception report block that follows contains statistics about packets received from this source.
Fraction lost: the fraction of RTP packets from source SSRC_n lost since the previous SR or RR packet was sent.
Cumulative number of packets lost: the total number of RTP packets from SSRC_n lost since reception began.
Extended highest sequence number received: the highest sequence number received in an RTP packet from SSRC_n.
Interarrival jitter: an estimate of the variance of the RTP packet interarrival time.
Last SR timestamp (LSR): the middle 32 bits of the NTP timestamp of the most recent SR packet received from SSRC_n; 0 if no SR packet has been received yet.
Delay since last SR (DLSR): the delay between receiving the last SR packet from SSRC_n and sending this report.
Audio and video synchronization
Audio and video are carried in two separate RTP sessions, and each RTP packet carries its own timestamp. The NTP timestamp in RTCP sender reports holds an absolute wall-clock time, which can be used to map the audio and video timestamps onto the same time axis and thus synchronize audio and video.
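In code, the mapping amounts to converting each stream's RTP timestamps to wall-clock time using the NTP/RTP timestamp pair carried in that stream's most recent sender report. A minimal sketch; the clock rates come from the rtpmap lines above, while the SR values and timestamps are made-up numbers:

#include <stdio.h>
#include <stdint.h>

/* Convert an RTP timestamp to seconds of wall-clock time, using the most
 * recent RTCP SR pair (sr_ntp seconds, sr_rtp timestamp) for that stream. */
static double rtp_to_wallclock(uint32_t rtp_ts, uint32_t sr_rtp,
                               double sr_ntp, double clock_rate)
{
    return sr_ntp + (double)(int32_t)(rtp_ts - sr_rtp) / clock_rate;
}

int main(void)
{
    /* hypothetical SR data for the two streams (wall clock 100.0 s) */
    double video_time = rtp_to_wallclock(2423419550u, 2423329550u, 100.0, 90000.0);
    double audio_time = rtp_to_wallclock(123025u, 112000u, 100.0, 11025.0);

    /* both map to 101.000 s, so these samples should be presented together */
    printf("video at %.3f s, audio at %.3f s\n", video_time, audio_time);
    return 0;
}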
The position of each protocol in TCP/IP
A later post will introduce the basics of the LIVE555 library.

Thursday, 19 November 2015

setuid() system call

Have you ever wondered what to do when your application, running as a normal user, needs to make a system call that requires root privileges?

Maybe that's why you are here!

   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/types.h>
   #include <unistd.h>

   int main(void)
   {
       /* become root; only works if the binary is owned by root and has the setuid bit */
       if (setuid(0) != 0) {
           perror("setuid");
           return 1;
       }
       /* example privileged action: kill PID 1090 */
       system("kill 1090");
       return 0;
   }

$ gcc program.c -o program
$ sudo chown root.root program
$ sudo chmod 4755 program
$ ./program

The setuid, setgid, and sticky Permissions

Contributed by Tom Rhodes.
Other than the permissions already discussed, there are three other specific settings that all administrators should know about. They are the setuid, setgid, and sticky permissions.
These settings are important for some UNIX® operations as they provide functionality not normally granted to normal users. To understand them, the difference between the real user ID and effective user ID must be noted.
The real user ID is the UID who owns or starts the process. The effective UID is the user ID the process runs as. As an example, passwd(1) runs with the real user ID when a user changes their password. However, in order to update the password database, the command runs as the effective ID of the root user. This allows users to change their passwords without seeing a Permission Denied error.
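A small program makes the difference visible. Assuming it is compiled, chowned to root and given the setuid bit as shown below, running it as an ordinary user prints that user's UID as the real UID and 0 as the effective UID:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* real UID: who started the process; effective UID: who it runs as */
    printf("real uid: %d, effective uid: %d\n",
           (int)getuid(), (int)geteuid());
    return 0;
}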
The setuid permission may be set by prefixing a permission set with the number four (4) as shown in the following example:
# chmod 4755 suidexample.sh
ref:https://www.freebsd.org/doc/handbook/permissions.html

Tuesday, 13 October 2015

What is processor,assembler and programming language

How does the processor (CPU) work?

You might know that the CPU (Central Processing Unit, or simply processor) is the “brain” of the computer, controlling all other parts of the computer and performing various calculations and operations with data. But how does it achieve that?
Processor is a circuit that is designed to perform single instructions: actually a whole series of them, one by one. The instructions to be executed are stored in some memory, in a PC, it’s the operating memory. Imagine the memory like a large grid of cells. Each cell can store a small number and each cell has its own unique number – address. The processor tells the memory address of a cell and the memory responds with the value (number, but it can represent anything – letters, graphics, sound… everything can be converted to numerical values) stored in the cell. Of course, the processor can tell the memory to store a new number in a given cell as well.
Instructions themselves are basically numbers too: each simple operation is assigned its own unique numeric code. The processor retrieves this number and decides what to do: for example, number 35 will cause the processor to copy data from one memory cell to another, number 48 can tell it to add two numbers together, and number 12 can tell it to perform a simple logical operation called OR.
Which operations are assigned to which numbers is decided by the engineers who design a given processor, or better said, a processor architecture: they decide what number codes will be assigned to various operations (and of course, they decide other aspects of the processor, but that's not relevant now). This set of rules is then called the architecture. This way, manufacturers can create various processors that support a given architecture: they can differ in speed, power consumption, and price, but they all understand the same codes as the same instructions.
Once the processor completes the action determined by the code (the instruction), it simply requests the following one and repeats the whole process. Sometimes it can also decide to jump to different places in the memory, for example to some subroutine (function) or jump a few cells back to a previous instruction and execute the same sequence again – basically creating a loop. The sequence of numerical codes that form the program is called machine code.
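A toy model of this fetch-decode-execute cycle fits in a few lines of C. This is only an illustration: the opcodes 35 (copy), 48 (add) and 12 (OR) are the made-up numbers from the examples above, not a real architecture.

#include <stdio.h>

int main(void)
{
    /* "memory": each cell holds a small number; the program starts at cell 0.
     * Layout per instruction: opcode, destination address, source address. */
    int mem[32] = {
        35, 20, 21,     /* copy mem[21] to mem[20]   */
        48, 20, 22,     /* add  mem[22] into mem[20] */
        12, 20, 23,     /* OR   mem[23] into mem[20] */
         0,             /* opcode 0: halt            */
        [20] = 0, [21] = 5, [22] = 3, [23] = 8,
    };

    int pc = 0;                              /* program counter */
    for (;;) {
        int op = mem[pc];                    /* fetch */
        if (op == 0) break;                  /* halt  */
        int dst = mem[pc + 1], src = mem[pc + 2];
        switch (op) {                        /* decode and execute */
        case 35: mem[dst]  = mem[src]; break;
        case 48: mem[dst] += mem[src]; break;
        case 12: mem[dst] |= mem[src]; break;
        }
        pc += 3;                             /* move on to the next instruction */
    }
    printf("result in cell 20: %d\n", mem[20]);   /* (5 + 3) | 8 = 8 */
    return 0;
}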

What are instructions and how are they used?

As I already mentioned, instructions are very simple tasks that the processor can perform, each one having its unique code. The circuit that makes up the processor is designed in a way to perform the given operations according to the codes it loads from the memory. The numeric code is often called opcode.
The operations that the instructions perform are usually very simple. Only by writing a sequence of these simple operations, can you make the processor perform a specific task. However, writing a sequence of numeric codes is quite tedious (though that’s how programming was done long ago), so the assembly programming language was created. It assigns opcodes (the numeric code) a symbol – a name that sort of describes what it does.
Given the previous examples, where number 35 makes the processor move data from one memory cell to another, we can assign this instruction name MOV, which is a short for MOVe. Number 48, which is the instruction that adds two numbers together gets the name ADD, and 12, which performs the OR logical operation, gets the name ORL.
The programmer writes a sequence of instructions – simple operations that the processor can perform, using these names, which are much easier to read than just numeric codes. Then he executes a tool named assembler (but often the term “assembler” is used also for the programming language, though technically it means the tool), which will convert these symbols to the appropriate numeric codes that can be executed by the processor.
However, in most cases, the instruction itself isn’t sufficient. For example, if you want to add two numbers together, you obviously need to specify them, the same goes for logical operations, or moving data from a memory cell to another: you need to specify the address of the source and the target cell. This is done by adding the so-called operands to the instruction – simply one or more values (numbers) that will provide additional information for the instruction needed to perform a given operation. The operands are stored in the memory too, along with the instruction opcodes.
For example, if you want to move data from a location with address 1000 to a location 1258, you can write:
MOV 1258, 1000
The first number is the target address and the second the source (in assembly you usually write the target first and the source second, which is quite common). The assembler (the tool that converts the source to machine code) stores these operands too. When the processor loads the instruction opcode, the opcode tells it that it must move data from one location to another; of course, it also needs to know the source and destination, so it loads the operand values from memory as well (they can be stored right after the instruction opcode), and once it has all the necessary data it performs the operation.

USB enable and disable

I will show you how to enable and disable a USB device from the command line, without unplugging it from the port!

Isn't it amazing :D !!!!!!

First, list the USB devices with the following commands:
$ lsusb
$ lsusb -t


To disable it:
echo '2-1' > /sys/bus/usb/drivers/usb/unbind

To enable it:
echo '2-1' > /sys/bus/usb/drivers/usb/bind

'2-1' is the bus-port identifier of the device to enable or disable (you can find it with lsusb -t). I haven't gone through the kernel code in detail; you can go through it and let me know if there is anything new to add.

Now protect your PC from external USB attacks: you can write a script that disables the PC's USB ports.
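For example, the same unbind write can be wrapped in a tiny C program; the device id "2-1" is only an example (take the real one from lsusb -t) and the program must run as root:

#include <stdio.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "2-1";   /* bus-port id from lsusb -t */
    FILE *f = fopen("/sys/bus/usb/drivers/usb/unbind", "w");

    if (!f) {                       /* needs root and a present usb driver */
        perror("unbind");
        return 1;
    }
    fprintf(f, "%s\n", dev);        /* write the id to detach the device */
    fclose(f);
    return 0;
}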

Thank you

Keep Hacking !!!


Wednesday, 13 May 2015

ftrace code for userspace


/*
 * file : ne_ftrace_console_write.c
 * desc : a demo program that uses "ftrace" for viewing the kernel control 
 *        path taken when a write(2) is made to "/dev/tty1".
 *
 * notes: code based on snippets from 'Documentation/trace/ftrace.txt' 
 *
 * Siro Mugabi, Copyright (c) nairobi-embedded.org
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define prfmt(fmt) "%s:%d:: " fmt, __func__, __LINE__
#define prerr(fmt, ...) fprintf(stderr, prfmt(fmt), ##__VA_ARGS__);
static const char *find_debugfs(void)
{
    #undef  PATH
    #define PATH     256
    #define _STR(x) #x
    #define STR(x) _STR(x)

    static char debugfs[PATH + 1];
    static int debugfs_found;
    char type[100];
    FILE *fp;

    if (debugfs_found)
        return debugfs;

    if ((fp = fopen("/proc/mounts", "r")) == NULL)
        return NULL;

    while (fscanf(fp, "%*s %" STR(PATH) "s %99s %*s %*d %*d\n",
                    debugfs, type) == 2) {
        if (strcmp(type, "debugfs") == 0)
            break;
    }
    fclose(fp);

    if (strcmp(type, "debugfs") != 0)
        return NULL;

    debugfs_found = 1;
    return debugfs;
}

static int trace_write(int fd, const char *fmt, ...)
{
    va_list ap;
    char buf[256];
    char *pbuf = buf;
    int len, ret = -1;

    if (fd < 0 || !fmt)
        return ret;

    va_start(ap, fmt);
    len = vsnprintf(buf, 256, fmt, ap);
    va_end(ap);

    while (len != 0 && (ret = write(fd, pbuf, len)) != 0) {
        if (ret == -1) {
            if (errno == EINTR)
                continue;
            prerr("%s\n", strerror(errno));
            break;
        }
        len -= ret;
        pbuf += ret;
    }
    return ret;
}

int main(int argc, char **argv)
{
    const char *debugfs;
    char path[PATH + 1];
    int tty_fd, tr_on_fd, marker_fd, tracer_fd;

    debugfs = find_debugfs();
    if (!debugfs) {
        prerr("find_debugfs failed!\n");
        exit(EXIT_FAILURE);
    }

    /* eh-hem, this is permissible for demo code 
     * do not use "system(3)" in production code */
    system("sysctl kernel.ftrace_enabled=1");
    #ifdef TRACE_PID
    {
        pid_t pid = getpid();
        memset(path, 0, PATH + 1);
        snprintf(path, PATH + 1, "echo %d > %s/tracing/set_ftrace_pid", pid,
             debugfs);
        system(path);
    }
    #endif

    /* get "${debugfs}/tracing/current_tracer" file desc. */
    memset(path, 0, PATH + 1);
    strcpy(path, debugfs);
    strcat(path, "/tracing/current_tracer");
    if ((tracer_fd = open(path, O_WRONLY)) < 0) {
        prerr("%s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* get "${debugfs}/tracing/tracing_on" file desc. */
    memset(path, 0, PATH + 1);
    strcpy(path, debugfs);
    strcat(path, "/tracing/tracing_on");
    if ((tr_on_fd = open(path, O_WRONLY)) < 0) {
        prerr("%s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* get "${debugfs}/tracing/trace_marker" file desc. */
    memset(path, 0, PATH + 1);
    strcpy(path, debugfs);
    strcat(path, "/tracing/trace_marker");
    if ((marker_fd = open(path, O_WRONLY)) < 0) {
        prerr("%s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* get "/dev/tty1" file desc. */
    if ((tty_fd = open("/dev/tty1", O_WRONLY)) < 0) {
        prerr("%s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* clear any previous trace */
    if ((trace_write(tr_on_fd, "0") < 0) ||
            (trace_write(tracer_fd, "nop") < 0)) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    /* "echo function_graph > ${debugfs}/tracing/current_tracer" */
    if (trace_write(tracer_fd, "function_graph") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    /* "echo 1 > ${debugfs}/tracing/tracing_on" */
    if (trace_write(tr_on_fd, "1") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    /* finally, perform the trace */
    if (trace_write(marker_fd, "Before console write\n") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    if (trace_write(tty_fd, "Dunia, vipi?\n") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    if (trace_write(marker_fd, "After console write\n") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    /* "echo 0 > ${debugfs}/tracing/tracing_on" */
    if (trace_write(tr_on_fd, "0") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    /* "lazy" copying the trace output to pwd */
    {
        memset(path, 0, PATH + 1);
        snprintf(path, PATH + 1,
             "cat %s/tracing/trace > ftrace_output.txt", debugfs);
        system(path);
    }

    /* clear the ring buffer */
    if (trace_write(tracer_fd, "nop") < 0) {
        prerr("trace_write failed!\n");
        exit(EXIT_FAILURE);
    }

    #ifdef TRACE_PID
    /* "lazy" reset "set_ftrace_pid" */
    {
        memset(path, 0, PATH + 1);
        snprintf(path, PATH + 1, "echo > %s/tracing/set_ftrace_pid",
             debugfs);
        system(path);
    }
    #endif

    close(tty_fd);
    close(tr_on_fd);
    close(marker_fd);
    close(tracer_fd);
    exit(EXIT_SUCCESS);
}

Wednesday, 29 April 2015

WorkQueue mechanism in Linux

1.Work Queue

The Linux workqueue mechanism simplifies the creation of kernel threads. By calling the workqueue interface you can create kernel threads, and the kernel may create one thread per CPU so that work can be processed in parallel. The work queue is a simple and effective kernel mechanism: it clearly simplifies the creation of kernel daemons and makes programming easier for the user.

A work queue defers work: work items pushed onto the queue are later handed to a kernel thread for execution, which means the deferred work (the "bottom half") runs in process context. Most importantly, code running from a work queue is allowed to reschedule and even sleep.


2.Data Structure
The task whose execution we defer is called work, and it is described by struct work_struct:

struct work_struct {
    atomic_long_t data;      /* low bits: flags (see below); rest: per-work data */
#define WORK_STRUCT_PENDING 0        /* T if work item pending execution */
#define WORK_STRUCT_STATIC 1        /* static initializer (debugobjects) */
#define WORK_STRUCT_FLAG_MASK (3UL)
#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
    struct list_head entry;        /* list node linking this work into a queue */
    work_func_t func;              /*work handler*/
#ifdef CONFIG_LOCKDEP
    struct lockdep_map lockdep_map;
#endif
};

Work items are organized into a work queue (workqueue), whose data structure is struct workqueue_struct:

struct workqueue_struct {
 struct cpu_workqueue_struct *cpu_wq;
 struct list_head list;
 const char *name;   /*workqueue name*/
 int singlethread;   /* non-zero for a single-threaded workqueue */
 int freezeable;  /* Freeze threads during suspend */
 int rt;
}; 

For a multi-threaded queue, Linux creates one of the following structures per CPU in the system, struct cpu_workqueue_struct:
struct cpu_workqueue_struct {
 spinlock_t lock;
 struct list_head worklist;
 wait_queue_head_t more_work;
 struct work_struct *current_work; 
 struct workqueue_struct *wq;  
 struct task_struct *thread; 
} ____cacheline_aligned;



This structure maintains the list of queued work, the wait queue on which the kernel thread sleeps, and the task context (task_struct) of the worker thread.
The relationship between the three structures is as follows:

3. Creating work
3.1 Creating a work queue
a. create_singlethread_workqueue(name)
The implementation of this function is shown below. It returns a pointer to a struct workqueue_struct; the memory it points to is allocated dynamically with kzalloc inside the function. The driver must therefore call void destroy_workqueue(struct workqueue_struct *wq) when the work queue is no longer needed, to release that memory.


cwq in the figure is a per-CPU data structure. For create_singlethread_workqueue, even on a multi-CPU system the kernel creates only one worker_thread kernel process. After that kernel process is created, it first sets up a wait-queue node, then checks cwq->worklist in a loop; if the list is empty, it adds the node to the cwq->more_work wait queue and sleeps there.

A driver calls queue_work(struct workqueue_struct *wq, struct work_struct *work) to add a work node to wq; the work is appended to the list headed by cwq->worklist. After adding the node, queue_work calls wake_up on cwq->more_work to wake the sleeping worker_thread process. wake_up invokes autoremove_wake_function on the wait node, which removes the node from the cwq->more_work wait queue.

When worker_thread is scheduled again, it processes all of the work nodes on cwq->worklist. Once every node has been handled, it puts its wait node back on cwq->more_work and sleeps on the wait queue again, until the driver calls queue_work once more.
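Roughly, the body of that worker loop looks like the sketch below. This is heavily simplified: the real kernel code also handles locking, freezing and thread shutdown, and the exact names differ between kernel versions.

/* simplified sketch of the per-cwq worker thread loop (not the real kernel code) */
static int worker_thread(void *arg)
{
    struct cpu_workqueue_struct *cwq = arg;

    for (;;) {
        /* sleep on cwq->more_work until queue_work() wakes us up */
        if (wait_event_interruptible(cwq->more_work,
                                     !list_empty(&cwq->worklist)))
            continue;                      /* interrupted by a signal; retry */

        while (!list_empty(&cwq->worklist)) {
            struct work_struct *work =
                list_entry(cwq->worklist.next, struct work_struct, entry);

            list_del_init(&work->entry);   /* take the node off the list */
            work->func(work);              /* run the handler in process context */
        }
    }
    return 0;
}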

b. create_workqueue



Compared with create_singlethread_workqueue, create_workqueue also allocates a workqueue_struct (wq), but the difference is that on a multi-CPU system one per-CPU cwq structure is created for each CPU, and for every cwq a separate worker_thread process is started. When a work node is submitted with queue_work, it is added to the worklist of the cwq belonging to the CPU on which queue_work was called.

c. Summary
When the user calls one of the workqueue initialization interfaces, create_workqueue or create_singlethread_workqueue, the kernel allocates a workqueue object for the user and links it onto a global workqueue list. Then, depending on the number of CPUs, Linux allocates the corresponding number of cpu_workqueue_struct objects for the workqueue object, each holding its own task list. Next, Linux creates a kernel thread (a kernel daemon) for each cpu_workqueue_struct object to process the tasks on its list. At this point the initialization interface returns the workqueue pointer and initialization is complete. After the workqueue has been initialized, the execution context exists, but no specific task has been run yet; the user still has to define a work_struct object and add it to the work queue, after which the Linux daemon thread wakes up and handles the task.

The kernel implementation of the workqueue principles described above can be illustrated as follows:




3.2 Creating work
To use a work queue, the first thing to do is to create the work that will later be deferred. The structure can be built statically at compile time with DECLARE_WORK:
DECLARE_WORK(name, void (*func)(void *), void *data);
This statically creates a work_struct structure called name, with handler function func and argument data.
Similarly, you can initialize a work item through a pointer at run time:
INIT_WORK(struct work_struct *work, void (*func)(void *), void *data);


4. Scheduling
a. schedule_work

In most cases you do not need to create your own work queue; you only define the work and attach it to the kernel's predefined event work queue. This queue is defined as a static global in kernel/workqueue.c: static struct workqueue_struct *keventd_wq;. Its default worker threads are called events/n, where n is the processor number, with one thread per processor. For example, a single-processor system has only the events/0 thread, while a dual-processor system additionally has events/1.
schedule_work adds the work structure to the global event work queue keventd_wq. It simply calls the common queue_work with keventd_wq as the queue, shielding the caller from that parameter, which is equivalent to using a default argument. keventd_wq is created, maintained and destroyed by the kernel itself. Work queued this way is scheduled quickly: as soon as the worker thread on the relevant processor wakes up, the work is executed.

b. schedule_delayed_work(&work, delay)
Sometimes you do not want the work to execute immediately but only after some delay. In that case the scheduling is deferred with a timer: a timer is registered whose callback, on expiry, queues the work. After the delay has passed, the timer fires and the work is added to the work queue.

Work queues have no notion of priority; work is essentially executed in FIFO order.
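As a minimal sketch of delayed scheduling, using the delayed_work API of later 2.6 kernels (which differs slightly from the three-argument macros shown above):

#include <linux/kernel.h>
#include <linux/workqueue.h>

static void delayed_handler(struct work_struct *w)
{
    printk(KERN_ALERT "ran roughly two seconds after scheduling\n");
}

static DECLARE_DELAYED_WORK(my_delayed_work, delayed_handler);

/* call from init code: run delayed_handler about 2 seconds (2 * HZ jiffies)
 * from now, on the default events queue */
static void start_delayed(void)
{
    schedule_delayed_work(&my_delayed_work, 2 * HZ);
}

Before the module is unloaded, the pending work should be cancelled, for example with cancel_delayed_work_sync(&my_delayed_work) on kernels that provide it.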

Example

#include <linux/module.h>
#include <linux/init.h>
#include <linux/workqueue.h>
static struct workqueue_struct *queue=NULL;
static struct work_struct   work;
static void work_handler(struct work_struct *data)
{
       printk(KERN_ALERT"work handler function.\n");
}
static int __init test_init(void)
{
      queue = create_singlethread_workqueue("helloworld");   /* name limited to ten characters */
      if (!queue)
            goto err;
       INIT_WORK(&work,work_handler);
       queue_work(queue, &work);    /* queue the work onto our own workqueue */
      return 0;
err:
      return -1;
}
static void __exit test_exit(void)
{
       destroy_workqueue(queue);
}
MODULE_LICENSE("GPL");
module_init(test_init);
module_exit(test_exit);

The longstanding task queue interface was removed in 2.5.41; in its place is a new "workqueue" mechanism. Workqueues are very similar to task queues, but there are some important differences. Among other things, each workqueue has one or more dedicated worker threads (one per CPU, by default) associated with it. So all tasks running out of workqueues have a process context, and can thus sleep. Note that access to user space is not possible from code running out of a workqueue; there simply is no user space to access. Drivers can create their own work queues - with their own worker threads - but there is a default queue (for each processor) provided by the kernel that will work in most situations.
Workqueues are created with one of:

    struct workqueue_struct *create_workqueue(const char *name);
    struct workqueue_struct *create_singlethread_workqueue(const char *name);

A workqueue created with create_workqueue() will have one worker thread for each CPU on the system; create_singlethread_workqueue(), instead, creates a workqueue with a single worker process. The name of the queue is limited to ten characters; it is only used for generating the "command" for the kernel thread(s) (which can be seen in ps or top).
Tasks to be run out of a workqueue need to be packaged in a struct work_struct structure. This structure may be declared and initialized at compile time as follows:

    DECLARE_WORK(name, void (*function)(void *), void *data);

Here, name is the name of the resulting work_struct structure, function is the function to call to execute the work, and data is a pointer to pass to that function.
To set up a work_struct structure at run time, instead, use the following two macros:

    INIT_WORK(struct work_struct *work, 
              void (*function)(void *), void *data);
    PREPARE_WORK(struct work_struct *work, 
                 void (*function)(void *), void *data);

The difference between the two is that INIT_WORK initializes the linked list pointers within the work_struct structure, while PREPARE_WORK changes only the function and data pointers. INIT_WORK must be used at least once before queueing the work_struct structure, but should not be used if the work_struct might already be in a workqueue.
Actually queueing a job to be executed is simple:

    int queue_work(struct workqueue_struct *queue, 
                   struct work_struct *work);
    int queue_delayed_work(struct workqueue_struct *queue, 
                    struct work_struct *work,
                           unsigned long delay);

The second form of the call ensures that a minimum delay (in jiffies) passes before the work is actually executed. The return value from both functions is nonzero if the work_struct was actually added to queue (otherwise, it may have already been there and will not be added a second time).
Entries in workqueues are executed at some undefined time in the future, when the associated worker thread is scheduled to run (and after the delay period, if any, has passed). If it is necessary to cancel a delayed task, you can do so with:

    int cancel_delayed_work(struct work_struct *work);

Note that this workqueue entry could actually be executing when cancel_delayed_work() returns; all this function will do is keep it from starting after the call.
To ensure that none of your workqueue entries are running, call:

    void flush_workqueue(struct workqueue_struct *queue);

This would be a good thing to do, for example, in a device driver shutdown routine. Note that if the queue contains work with long delays this call could take a long time to complete. This function will not (as of 2.5.68) wait for any work entries submitted after the call was first made; you should ensure that, for example, any outstanding work queue entries will not resubmit themselves. You should also cancel any delayed entries (with cancel_delayed_work()) first if need be.
Work queues can be destroyed with:

    void destroy_workqueue(struct workqueue_struct *queue);

This operation will flush the queue, then delete it.
Finally, for tasks that do not justify their own workqueue, a "default" work queue (called "events") is defined. work_struct structures can be added to this queue with:

    int schedule_work(struct work_struct *work);
    int schedule_delayed_work(struct work_struct *work, unsigned long delay);

Most users of workqueues can probably use the predefined queue, but one should bear in mind that it is a shared resource. Long delays in the worker function will slow down other users of the queue, and should be avoided. There is a flush_scheduled_work() function which will wait for everything on this queue to be executed. If your module uses the default queue, it should almost certainly call flush_scheduled_work() before allowing itself to be unloaded.


With the first call, the work is scheduled immediately and runs as soon as the events worker thread on the current processor wakes up.
With the second call, the work represented by &work will not execute until at least delay timer ticks have passed.


One final note: schedule_work(), schedule_delayed_work() and flush_scheduled_work() are exported to any modules which wish to use them. The other functions (for working with separate workqueues) are exported to GPL-licensed modules only.



Reference
https://lwn.net/Articles/23634/