
I've got some problems understanding the difference between logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations.

Could someone please explain the difference between the two, and perhaps show how to calculate the complexity of a problem like A+B*C?

(Yes this is part of an assignment =) )

Thx for any help!

/Marthin


4 Answers


Uniform cost criteria assign a constant cost to every machine operation, regardless of the number of bits involved, while logarithmic cost criteria assign to every machine operation a cost proportional to the number of bits involved.
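To make this concrete for the A+B*C expression from the question, here is a small sketch (my own illustration, not part of the original answer; charging each operation the sum of its operands' bit lengths is one common simplification of the logarithmic criterion):

```python
def bits(x):
    """Number of bits needed to represent a non-negative integer (at least 1)."""
    return max(1, x.bit_length())

def uniform_cost_a_plus_b_times_c(a, b, c):
    # Uniform cost: every operation costs 1 unit, whatever the operand sizes.
    # A + B*C is one multiplication plus one addition.
    return 2

def logarithmic_cost_a_plus_b_times_c(a, b, c):
    # Logarithmic cost (simplified): charge each operation the total bit
    # length of its operands, i.e. roughly log2 of their magnitudes.
    mul = bits(b) + bits(c)        # cost of B*C
    add = bits(a) + bits(b * c)    # cost of A + (B*C)
    return mul + add

print(uniform_cost_a_plus_b_times_c(3, 4, 5))      # 2
print(logarithmic_cost_a_plus_b_times_c(3, 4, 5))  # 13
```

Note how the uniform cost stays fixed at 2 operations while the logarithmic cost grows as the operands get larger.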

Fareed

Problem size influences complexity. Since complexity depends on the size of the problem, we define complexity to be a function of problem size. Definition: let T(n) denote the complexity of an algorithm applied to a problem of size n. The size n of a problem instance I is the number of (binary) bits used to represent the instance, so problem size is the length of the binary description of the instance. This is called the logarithmic cost criterion.

Unit cost criterion. If you assume that:

- every computer instruction takes one time unit,
- every register is one storage unit,
- and a number always fits in a register,

then you can use the number of inputs as the problem size, since the length of the input (in bits) will be a constant times the number of inputs.
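Under those assumptions, a loop over n register-sized inputs costs one unit per iteration, so complexity can be stated directly in the number of inputs. A tiny sketch of my own (not from the answer):

```python
def sum_list(xs):
    """Sum n fixed-width numbers, counting unit-cost operations.

    Under the unit cost criterion each addition of register-sized
    values costs 1, so summing n numbers costs T(n) = n additions.
    """
    total = 0
    ops = 0
    for x in xs:
        total += x   # one unit-cost addition
        ops += 1
    return total, ops

print(sum_list([1, 2, 3, 4]))  # (10, 4): 4 inputs, 4 unit-cost additions
```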

Anon

Uniform cost criteria assume that every instruction takes a single unit of time and that every register requires a single unit of space.

Logarithmic cost criteria assume that every instruction takes a logarithmic number of time units (with respect to the length of the operands) and that every register requires a logarithmic number of units of space.

In simpler terms, what this means is that uniform cost criteria count the number of operations, and logarithmic cost criteria count the number of bit operations.

For example, suppose we have an 8-bit adder.

If we're using uniform cost criteria to analyze the run-time of the adder, we would say that addition takes a single time unit; i.e., T(n) = 1.

If we're using logarithmic cost criteria to analyze the run-time of the adder, we would say that addition takes lg n time units; i.e., T(n) = lg n, where n is the worst-case value the adder has to handle (in this example, n would be 256, the number of distinct values an 8-bit register can hold). Thus, T(n) = lg 256 = 8.

More specifically, say we're adding 200 to 32. To perform the addition, we have to add the binary bits together in the 1s column, the 2s column, the 4s column, etc. (columns meaning the bit positions). Every number below 256 fits in 8 bits, and this is where logarithms come into our analysis: lg 256 = 8. So to add the two numbers, we have to perform addition on 8 columns. Logarithmic cost criteria say that each of these 8 single-bit additions takes a single unit of time. Uniform cost criteria say that the entire set of 8 additions takes a single unit of time.
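The column-by-column process can be sketched in code (my own illustration, a ripple-carry addition that counts one bit operation per column processed):

```python
def add_counting_bit_ops(a, b):
    """Add two non-negative integers bit by bit (ripple carry),
    counting one 'bit operation' per column (bit position) processed.

    The bit-operation count is what logarithmic cost criteria charge;
    uniform cost criteria would charge 1 for the whole addition.
    """
    result, carry, bit_ops, shift = 0, 0, 0, 0
    while a or b or carry:
        abit, bbit = a & 1, b & 1
        s = abit ^ bbit ^ carry                            # sum bit for this column
        carry = (abit & bbit) | (abit & carry) | (bbit & carry)  # carry out
        result |= s << shift
        a >>= 1
        b >>= 1
        shift += 1
        bit_ops += 1
    return result, bit_ops

print(add_counting_bit_ops(200, 32))  # (232, 8): 8 columns for 8-bit operands
```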

Similar analysis can be made in terms of space as well. Registers either take up a constant amount of space (under uniform cost criteria) or a logarithmic amount of space (under logarithmic cost criteria).

AOquaish

I think you should do some research on Big O notation... http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions

If there is a part of the description you find difficult, edit your question.

Charles Beattie
  • I know most of the parts regarding Big O notation, but all of that relates to the logarithmic cost criteria, does it not? The link doesn't tell me anything about uniform cost and how to use it in calculations. I've searched around, and there doesn't seem to be much information about this specific question. – Marthin May 21 '10 at 16:28