p1 question 1

by José Coelho de Pina

QUESTION 1 Name a sorting algorithm that runs in optimal time and uses constant extra space. (The answer must have at most five words.)

The optimal running time for (comparison-based) sorting algorithms is $\textup{O}(n \lg n)$, where n is the number of items in the array to be sorted.

Constant extra space (also called in-place or in situ) means that the scratch space used by the algorithm must not grow as a function of the number of elements in the array being sorted.

Mergesort runs in time proportional to $n \lg n$, but uses extra space proportional to n. It is easy to reduce the extra space to n/2, and it takes real work to use constant extra space. Merging two sorted arrays with constant extra space, while keeping the merge in time proportional to n, is considerably more complicated; see "A simple linear-time algorithm for in situ merging".

Quicksort takes quadratic time in the worst case, and in the worst case uses extra space proportional to n for the recursion stack (the pairs lo and hi of the intervals still waiting to be sorted). With not much effort we can make the extra space proportional to lg n in the worst case.

Heapsort runs in $\textup{O}(n \lg n)$ time and uses constant extra space. That's the one.

Speaking of Quicksort, it is worth looking at its implementation in glibc. See The GNU C Library (glibc): "The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. These APIs include such foundational facilities as open, read, write, malloc, printf, getaddrinfo, dlopen, pthread_create, crypt, login, exit and more."

The latest release of this library was on 2018-02-01. I have put qsort.c together with the lecture notes. Below I copy some comments extracted from that file.

Comments?


/* Copyright (C) 1991-2018 Free Software Foundation, Inc.
   [...]
*/


/* If you consider tuning this algorithm, you should consult first:
   Engineering a sort function; Jon Bentley and M. Douglas McIlroy;
   Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993.  */

#include <alloca.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* Byte-wise swap two items of size SIZE. */
[...]
 
/* Discontinue quicksort algorithm when partition gets below this size.
   This particular magic number was chosen to work best on a Sun 4/260. */
#define MAX_THRESH  4

/* Stack node declarations used to store unfulfilled partition obligations. */
typedef struct
  {
    char *lo;
    char *hi;
  } stack_node;

/* The next 4 #defines implement a very fast in-line stack abstraction. */
/* The stack needs log (total_elements) entries (we could even subtract
   log(MAX_THRESH)).  Since total_elements has type size_t, we get as
   upper bound for log (total_elements):
   bits per byte (CHAR_BIT) * sizeof(size_t).  */
#define STACK_SIZE	(CHAR_BIT * sizeof(size_t))
#define PUSH(low, high)	((void) ((top->lo = (low)), (top->hi = (high)), ++top))
#define	POP(low, high)	((void) (--top, (low = top->lo), (high = top->hi)))
#define	STACK_NOT_EMPTY	(stack < top)


/* Order size using quicksort.  This implementation incorporates
   four optimizations discussed in Sedgewick:

   1. Non-recursive, using an explicit stack of pointer that store the
      next array partition to sort.  To save time, this maximum amount
      of space required to store an array of SIZE_MAX is allocated on the
      stack.  Assuming a 32-bit (64 bit) integer for size_t, this needs
      only 32 * sizeof(stack_node) == 256 bytes (for 64 bit: 1024 bytes).
      Pretty cheap, actually.

   2. Chose the pivot element using a median-of-three decision tree.
      This reduces the probability of selecting a bad pivot value and
      eliminates certain extraneous comparisons.

   3. Only quicksorts TOTAL_ELEMS / MAX_THRESH partitions, leaving
      insertion sort to order the MAX_THRESH items within each partition.
      This is a big win, since insertion sort is faster for small, mostly
      sorted array segments.

   4. The larger of the two sub-partitions is always pushed onto the
      stack first, with the algorithm then concentrating on the
      smaller partition.  This *guarantees* no more than log (total_elems)
      stack size is needed (actually O(1) in this case)!  */

void
_quicksort (void *const pbase, size_t total_elems, size_t size,
	    __compar_d_fn_t cmp, void *arg)
{
     [...]

      while (STACK_NOT_EMPTY)
        {
          char *left_ptr;
          char *right_ptr;

	  /* Select median value from among LO, MID, and HI. Rearrange
	     LO and HI so the three values are sorted. This lowers the
	     probability of picking a pathological pivot value and
	     skips a comparison for both the LEFT_PTR and RIGHT_PTR in
	     the while loops. */

          [...]

	  /* Here's the famous ``collapse the walls'' section of quicksort.
	     Gotta like those tight inner loops!  They are the main reason
	     that this algorithm runs much faster than others. */
          [...]
 

          /* Set up pointers for next iteration.  First determine whether
             left and right partitions are below the threshold size.  If so,
             ignore one or both.  Otherwise, push the larger partition's
             bounds on the stack and continue sorting the smaller one. */

           [...]

  /* Once the BASE_PTR array is partially sorted by quicksort the rest
     is completely sorted using insertion sort, since this is efficient
     for partitions below MAX_THRESH size. BASE_PTR points to the beginning
     of the array to sort, and END_PTR points at the very last element in
     the array (*not* one beyond it!). */

    /* Insertion sort, running from left-hand-side up to right-hand-side.  */

    [...]
              
}