Thursday, March 14, 2013

MVT & What is Compaction?

To avoid the main drawbacks of MFT (Multiprogramming with a Fixed number of Tasks), namely that it wastes memory and that it limits the number of processes that can run at a time even when enough memory is available, the MVT (Multiprogramming with a Variable number of Tasks) mechanism was introduced.

Here we are mainly focusing on external fragmentation rather than internal fragmentation.

Consider the job allocation table above; the following diagram shows the steps of MVT.

Here compaction can be considered a kind of defragmentation process that happens automatically in main memory. Compaction is a high-cost event, because the CPU needs to stop all other work in order to do the compaction.

Somebody could be misled into thinking that MVT never uses internal fragmentation; in fact it uses internal fragmentation on very special occasions.

For example, think that we have a 50-byte fragment left after compaction. If we treat it as an external fragment we cannot use it for any process. On the other hand, to keep track of the free segments we need table entries, and the space needed for such an entry can be more than the free space in the fragment itself; in that kind of situation the fragment is left as internal fragmentation instead. But ideally we consider that MVT uses only external fragments.
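
To make the compaction idea concrete, below is a minimal sketch in Java of an MVT-style memory in which the allocated partitions are slid towards address 0 so that the scattered holes merge into one free block. The Partition and Memory classes and their fields are names I made up for this illustration; a real memory manager keeps this information in its job allocation table.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: Partition and Memory are names invented for this example.
class Partition {
    String job;   // null means this partition is a free hole
    int base;     // start address
    int size;     // length in bytes

    Partition(String job, int base, int size) {
        this.job = job;
        this.base = base;
        this.size = size;
    }
}

class Memory {
    private final int total;
    private final List<Partition> partitions = new ArrayList<>();

    Memory(int total) { this.total = total; }

    void add(Partition p) { partitions.add(p); }

    // Compaction: slide every allocated partition towards address 0 so that
    // all the small external fragments merge into one free hole at the end.
    void compact() {
        int nextBase = 0;
        List<Partition> compacted = new ArrayList<>();
        for (Partition p : partitions) {
            if (p.job != null) {              // keep allocated partitions, relocating them
                p.base = nextBase;
                nextBase += p.size;
                compacted.add(p);
            }
        }
        if (nextBase < total) {               // whatever is left becomes one big free hole
            compacted.add(new Partition(null, nextBase, total - nextBase));
        }
        partitions.clear();
        partitions.addAll(compacted);
    }
}

Relocating every partition like this is exactly what makes compaction so costly: the CPU cannot let the jobs run while their addresses are being changed.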

(Ref: video lecture by Prof. P.K Bisvas)

Sunday, March 10, 2013

Can't it be changed

In a normal operating system we can see a process state diagram as follows.
In this algorithm I have seen a special scenario which I want to point out as a drawback. After a process has been activated, if it needs some user input, or the output of another process as its input, it has to sit in the wait state while other processes are executing. After the needed input arrives, the process should become active again, but in today's systems this is not a direct move: the process has to go back into the ready-state queue and wait until its turn comes around again.

In my opinion this is not a fair arrangement. Why can't it be changed as follows?
Here we can assign a priority value to each process that has to queue in the waiting state. The priority value has two levels, high and low. After the required input has been received, the process moves to a new "check state". In this state the priority value is checked: if it is high, the hand-over from the "ready state" queue is paused and the check-state process is moved straight to the active state; if it is low, it is queued under the "ready state" as usual.

Algorithm:

Transform the new state to the ready state.
Activate the process.
If the process does not need the wait state:
    continue until the halted state
else:
    put the process into the waiting state
    when its input arrives, move it to the check state
    check the priority value
    if the priority is high:
        pause the ready-state hand-over
        change the check state to the active state
    else (low):
        change the check state to the ready state

I don't know whether this idea is acceptable to OS architects or not, but it is the idea that suddenly popped into my mind while listening to the process state diagram lecture at university.

Saturday, March 9, 2013

Crossover Algorithm

Crossover is an important and fascinating biological function which happens in the natural environment as part of animal reproduction. It produces a new child who has a collection of the parents' genes plus totally new or extended features. Mutation is the process that adds uniqueness during crossover.

The core of a genetic algorithm is built around this concept. But how do we do crossover in a genetic algorithm? This is my way of doing it.

Assumption --> crossover probability = 0.7

Select a random float "n" below 1.00.
If n > 0.7, return either parent as the child;
else do the following:
    Select 2 parents randomly.
    Decode them to 10-bit binary strings.
    Eg:  parent A : 1 1 0 0 1 0 1 0 0 1
         parent B : 1 0 1 0 0 1 0 1 0 1

    Loop "x" from 0 to the length of the bit string (10) and select the pair of bits at index "x" of each string.
    Eg: the zeroth element of parent A and the zeroth element of parent B (1, 1)

    If both bits are the same, use that bit as the element at index "x" of the child;
    else randomly select 0 or 1 as the element at index "x". Do this until the end of the loop.
    Eg:  parent A : 1 1 0 0 1 0 1 0 0 1
         parent B : 1 0 1 0 0 1 0 1 0 1
         Child    : 1 0 1 0 1 1 0 0 0 1

    Encode the child bit string.
    Return the child.
   
This is Lisp code that follows the algorithm above; the int-to-binary and random-item helpers are sketched here as well, assuming the 10-bit encoding used in the example.

(defun int-to-binary (n)                        ; assumed helper: n as a 10-bit string
  (format nil "~10,'0B" n))
(defun random-item (lst)                        ; assumed helper: random element of lst
  (nth (random (length lst)) lst))
(defun crossover (xA xB)                        ; crossover and produce child
  (let* ((b-xA (int-to-binary xA))
         (b-xB (int-to-binary xB))
         (child (make-string (length b-xA))))   ; empty bit string for the child
    (loop for y from 0 below (length b-xA) do
      (setf (char child y)
            (if (char/= (aref b-xA y) (aref b-xB y))
                (random-item '(#\0 #\1))        ; bits differ: pick 0 or 1 at random
                (char b-xA y))))                ; bits agree: copy the shared bit
    (parse-integer child :radix 2)))
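
For example, with the 10-bit encoding assumed above, the two parents from the earlier example are the integers 809 (1100101001) and 661 (1010010101), so a call like (crossover 809 661) returns a child whose bits copy the parents wherever they agree and are chosen randomly everywhere else.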

Friday, March 8, 2013

Truth of stupidity by Einstein



At the time I found this great quote, I thought that this is the first thing we need to learn: we can't do everything, but each of us is special by some criterion, even if that criterion is stupidity.

Thursday, March 7, 2013

Agile for Implementations

I think we need to pay more attention to the input and output of a program rather than to the technology we are going to use to implement it. We need to have a clear idea about the inputs we can provide and the output we need to get.

Then we can select a programming language, special algorithms (if necessary) and a platform in order to define the output-generating procedure.

The major wrong thing we do is to try to minimise the code or reduce the running time at the very beginning. This can cause headaches and errors which we cannot identify and correct easily. Yes, sometimes we manage to do it and reduce the code, but after completing the program we will be fed up. Also, the probability of identifying the right "super technology" is low in this kind of process, compared with concentrating on it later.

Therefore I think it is better to implement a program step by step. First we consider the input and output. Then we implement methods to get the output, without thinking about the efficiency or flexibility of the program; I like to call this the "fool programming method". Then, somehow, we have the output. As the next step we review the program and change the code; this can be done several times. First we remove the most foolish things we did, and next the slightly less foolish things.

Eg: step 1 : Input  --> any couple of integers x, y
             Output --> (x*y)/(x+y)

    step 2 : "fool programming method"

                 public void calc(int x, int y)
                 {
                     int temp1 = x + y;
                     int temp2 = x * y;
                     double answer = (double) temp2 / temp1;
                     System.out.println(answer);
                 }

    step 3 : review and correct (can be done several times)

                 1.  public void calc(int x, int y)
                     {
                         double answer = (double) (x * y) / (x + y);
                         System.out.println(answer);
                     }

                 2.  public void calc(int x, int y)
                     {
                         System.out.println((double) (x * y) / (x + y));
                     }

                 3.  public double calc(int x, int y)
                     {
                         return (double) (x * y) / (x + y);
                     }

I think this can be more effective than the traditional method of doing everything right the first time. It is somewhat similar to the agile approach used to manage the design process of big software systems.


Avoid depending on (traditional) Algorithms :-O

Algorithms are an important and interesting part of computer science. Their main purpose is to reduce the time and resources the computer consumes.

But I have seen several cons of this mindset. Think that you need to search for a piece of data in a big database. Ah... a searching algorithm can reduce the time taken. Now I'm hunting for the best one. Linear search? Mm... no. How about binary search? Wow, there is a new technology, the GENETIC ALGORITHM: the word is big, the technology is high. So I used it.

But the thing is, we don't even think about a new algorithm; we just use an existing one. I think it is better to avoid depending on the traditional algorithms; we need to search for new concepts. We don't need to put frames around our coding such as "the algorithm needs to be a very fast, resource-saving one". Suppose we are just coding according to our own algorithm: it is much faster than the traditional one, but a resource killer compared with the system resources we have, so what we do is neglect it.

Think about a situation where we have enough compatible resources for that algorithm: then it becomes the most powerful algorithm for that scenario. Therefore we cannot measure an algorithm purely by the level of resources it uses.

Sometimes we neglect certain algorithms because of the time they consume, but we also need to think about accuracy. For some scenarios we have enough time, but the accuracy needs to be 100%. Even though such an algorithm has a time problem, we can trust its accuracy.

I think it is better to flush the "time and cost reducing" concept from our minds. Every algorithm has pros and cons, but these are not universal; they depend on the scenario we are working in. In my opinion it is better to use our own algorithms when programming: keep the traditional, popular algorithms as a reference, but don't depend on them.