Description
Program the following two assignments in C. You may use the C programming
environment on the Computer Science department UNIX/Linux server,
“xlogin.cs.ecu.edu”. You can log in to the server with:
$ ssh username@xlogin.cs.ecu.edu
If you are off the ECU campus, you need to connect to ECU via VPN; see the link on how to
connect via VPN: http://www.ecu.edu/cs-itcs/connect/studentVPN.cfm
A tutorial for UNIX and C is available at: http://heather.cs.ucdavis.edu/~matloff/unix.html
Part 1: UNIX Processes
Use the thin clients in Austin Room No. 208 or the PCs in Room No. 207 or your own
computer to get onto xlogin.cs.ecu.edu and work on your programming assignments.
The xlogin server runs SUSE LINUX. Create a subdirectory called cs4110 in your home
directory. Create a subdirectory called assign1 in your cs4110 directory. Use that
subdirectory to store all the files concerning this assignment and nothing else. You need
to follow these general guidelines for all your future assignments as well. Name the two
source files worker.c and coordinator.c. The code for the worker process should be
compiled separately, and its executable should be called worker. It should be possible to
execute the worker program independently of the coordinator. The executable for the
coordinator process should be called coordinator. If you are not using a makefile, please
include the name of the compiler you are using and any special options needed as
comments in your source code.
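For illustration only, here is a minimal sketch of one way a coordinator might launch the
separately compiled worker executable as a child process and wait for it. The "./worker"
path, the absence of command-line arguments, and the overall structure are assumptions
made for this sketch, not the required design; it can be compiled with, for example,
gcc -o coordinator coordinator.c.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* create one child process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: replace its image with the worker executable */
        execl("./worker", "worker", (char *)NULL);
        perror("execl");                    /* reached only if exec fails */
        exit(EXIT_FAILURE);
    }
    /* parent (coordinator): wait for the worker to finish */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("worker exited with status %d\n", WEXITSTATUS(status));
    return 0;
}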
Part 2: POSIX threads
Use the thin clients in Austin Room No. 208 or the PCs in Room No. 207 or your own
computer to get onto xlogin.cs.ecu.edu and work on your programming assignments.
The xlogin server runs SUSE LINUX. Create a subdirectory called cs4110 in your home
directory. Create a subdirectory called assign2 in your cs4110 directory. Use that
subdirectory to store all the files concerning this assignment and nothing else. You need
to follow these general guidelines for all your future assignments as well. Name your
source file prefix.c. If you are not using a makefile, please include the name of the
compiler you are using and any special options needed as comments in your source code.
Note that the Sum array now has the desired values in it, and that in each step every
position of the array can be computed independently. Hence, if we use one thread for
each position of the array, each step can be computed in unit time. Since the number of
steps is ⌈log2 n⌉, we obtain an asymptotic speedup: O(log n) time, compared with the
O(n) time of the sequential algorithm.
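To make the thread-per-position idea concrete, here is a minimal sketch (not the assigned
solution) in which each of N threads owns one array position and the steps are separated
by the Pthreads barrier calls discussed in the next paragraph. The array name Sum, the
fixed size N = 8, the sample input, and the thread-argument layout are assumptions made
for illustration; compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

#define N 8

static int Sum[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static pthread_barrier_t step_barrier;

static void *position_thread(void *arg)
{
    long i = (long)arg;                          /* this thread's array position */
    for (long d = 1; d < N; d *= 2) {
        int addend = (i >= d) ? Sum[i - d] : 0;  /* read the old value */
        pthread_barrier_wait(&step_barrier);     /* all reads of this step done */
        Sum[i] += addend;                        /* write the new value */
        pthread_barrier_wait(&step_barrier);     /* all writes of this step done */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    pthread_barrier_init(&step_barrier, NULL, N);
    for (long i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, position_thread, (void *)i);
    for (long i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < N; i++)
        printf("%d ", Sum[i]);                   /* prints the prefix sums */
    printf("\n");
    pthread_barrier_destroy(&step_barrier);
    return 0;
}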
Needless to say, care must be taken to ensure that the threads are synchronized properly.
Basically, each thread can move on to the next step only after all the threads have
completed the previous step. This type of synchronization is called barrier
synchronization. Barrier synchronization comes up in so many practical applications
that Pthreads provides direct calls for it; I will describe those calls in this handout.
However, we can also implement it ourselves, using the mutex locks and condition
variables of Pthreads.
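As one illustrative sketch of that second approach, a reusable barrier can be built from a
Pthreads mutex and a condition variable; the names my_barrier_t, my_barrier_init, and
my_barrier_wait below are invented for this example and are not part of the Pthreads API.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_arrived;
    int count;      /* threads that have arrived in the current round */
    int total;      /* threads needed to release the barrier */
    int round;      /* distinguishes successive uses of the barrier */
} my_barrier_t;

void my_barrier_init(my_barrier_t *b, int total)
{
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->all_arrived, NULL);
    b->count = 0;
    b->total = total;
    b->round = 0;
}

void my_barrier_wait(my_barrier_t *b)
{
    pthread_mutex_lock(&b->lock);
    int my_round = b->round;
    if (++b->count == b->total) {
        /* last thread to arrive: start a new round and wake everyone */
        b->count = 0;
        b->round++;
        pthread_cond_broadcast(&b->all_arrived);
    } else {
        /* wait for the round number to change; the loop guards against
           spurious wakeups from pthread_cond_wait */
        while (my_round == b->round)
            pthread_cond_wait(&b->all_arrived, &b->lock);
    }
    pthread_mutex_unlock(&b->lock);
}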