Conciseness is a clear advantage of high-level languages over assembly language.

The final advantage is that programming languages allow programs to be independent of the computer on which they were developed, since compilers and assemblers can translate high-level language programs to the binary instructions of any computer. These three advantages are so strong that today little programming is done in assembly language. The underlying hardware in any computer performs the same basic functions: inputting data, outputting data, processing data, and storing data.

How these functions are performed is the primary topic of this book, and subsequent chapters deal with different parts of these four tasks. When we come to an important point in this book, a point so significant that we hope you will remember it forever, we emphasize it by identifying it as a Big Picture item.

We have about a dozen Big Pictures in this book, the first being the five components of a computer that perform the tasks of inputting, outputting, processing, and storing data. Two key components of computers are input devices, such as the microphone, and output devices, such as the speaker.

As the names suggest, input feeds the computer, and output is the result of computation sent to the user. Some devices, such as wireless networks, provide both input and output to the computer.

The five classic components of a computer are input, output, memory, datapath, and control, with the last two sometimes combined and called the processor.

This organization is independent of hardware technology: you can place every piece of every computer, past and present, into one of these five categories. To help you keep all this in perspective, the five components of a computer are shown on the front page of each of the following chapters, with the portion of interest to that chapter highlighted.

The processor gets instructions and data from memory. Input writes data to memory, and output reads data from memory. Control sends the signals that determine the operations of the datapath, memory, input, and output.
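To make this division of labor concrete, here is a minimal Python sketch of a fetch-and-execute loop in which control, datapath, memory, input, and output interact. The three-field instruction format and all names are invented for illustration; real processors implement this in hardware, not software.

```python
# Illustrative toy machine: instructions and data share one memory.
memory = {0: ("load", 100, "r1"),
          1: ("add", 101, "r1"),
          2: ("store", "r1", 102),
          3: ("halt", None, None),
          100: 7, 101: 35, 102: 0}   # data

registers = {"r1": 0}                # datapath state
pc = 0                               # control state: next instruction

while True:
    op, a, b = memory[pc]            # processor gets instructions from memory
    pc += 1
    if op == "load":                 # control decides what the datapath does
        registers[b] = memory[a]
    elif op == "add":
        registers[b] += memory[a]    # datapath performs the arithmetic
    elif op == "store":
        memory[b] = registers[a]     # result written back to memory
    elif op == "halt":
        break

print(memory[102])                   # prints 42
```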

Screens are composed of hundreds of thousands to millions of pixels, organized in a matrix. Through computer displays I have landed an airplane on the deck of a moving carrier, observed a nuclear particle hit a potential well, flown in a rocket at nearly the speed of light and watched a computer reveal its innermost workings.

Most personal mobile devices use liquid crystal displays (LCDs) to get a thin, low-power display. The LCD is not the source of light; instead, it controls the transmission of light.

A typical LCD includes rod-shaped molecules in a liquid that form a twisting helix that bends light entering the display, from either a light source behind the display or, less often, from reflected light. The rods straighten out when a current is applied and no longer bend the light. Since the liquid crystal material is between two screens polarized at 90 degrees, the light cannot pass through unless it is bent. Today, most LCDs use an active matrix that has a tiny transistor switch at each pixel to control current precisely and make sharper images.

A red-green-blue mask associated with each dot on the display determines the intensity of the three color components in the final image; in a color active matrix LCD, there are three transistor switches at each point. The image is composed of a matrix of picture elements, or pixels, which can be represented as a matrix of bits, called a bit map.

A color display might use 8 bits for each of the three colors (red, blue, and green) for 24 bits per pixel, permitting millions of different colors to be displayed. The computer hardware support for graphics consists mainly of a raster refresh buffer, or frame buffer, to store the bit map. The image to be represented onscreen is stored in the frame buffer, and the bit pattern per pixel is read out to the graphics display at the refresh rate.

The goal of the bit map is to represent faithfully what is on the screen. The challenges in graphics systems arise because the human eye is very good at detecting even subtle changes on the screen.
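As a concrete picture of the bit map, here is a small Python sketch of a 24-bit frame buffer; the resolution, pixel values, and helper names are assumptions made for illustration, and a real frame buffer is dedicated hardware scanned out at the refresh rate.

```python
# Illustrative sketch: a frame buffer as a matrix of 24-bit pixels.
WIDTH, HEIGHT = 640, 480             # assumed resolution

# One (red, green, blue) triple per pixel, 8 bits per color.
frame_buffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, r, g, b):
    """Store one pixel's bit pattern in the frame buffer."""
    frame_buffer[y][x] = (r & 0xFF, g & 0xFF, b & 0xFF)

set_pixel(0, 0, 200, 200, 200)       # a lighter shade at (X0, Y0)
set_pixel(1, 1, 15, 15, 15)          # a darker shade at (X1, Y1)

# The display hardware would scan the whole matrix at the refresh rate;
# here we just read one pixel back.
print(frame_buffer[0][0])            # (200, 200, 200)
```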

Pixel (X0, Y0) contains a bit pattern that is a lighter shade on the screen than the bit pattern in pixel (X1, Y1). While there are a variety of ways to implement a touch screen, many tablets today use capacitive sensing.

Since people are electrical conductors, if an insulator like glass is covered with a transparent conductor, touching distorts the electrostatic field of the screen, which results in a change in capacitance. This technology can allow multiple touches simultaneously, enabling the recognition of gestures that can lead to attractive user interfaces.

Opening the Box

Figure 1. shows the contents of the iPhone. At the left is the capacitive multitouch screen and LCD display.

Next to it is the battery. To the far right is the metal frame that attaches the LCD to the back of the iPhone. The small components in the center are what we think of as the computer; they are not simple rectangles, so that they can fit compactly inside the case next to the battery. Courtesy TechInsights, www.techinsights.com. An integrated circuit is a device combining dozens to millions of transistors. The datapath, control, and memory are a tiny portion of the components. The small rectangles in Figure 1. are integrated circuits, or chips. The A12 package is seen in the middle of Figure 1.

The processor is the active part of the computer, following the instructions of a program to the letter. Occasionally, people call the processor the CPU, for the more bureaucratic-sounding central processor unit. Descending even lower into the hardware, Figure 1. shows the processor in more detail. The processor logically comprises two main components: datapath and control, the respective brawn and brain of the processor.

The datapath performs the arithmetic operations, and control tells the datapath, memory, and I/O devices what to do according to the instructions of the program. The large integrated circuit in the middle is the Apple A12 chip, which contains two large and four small ARM processor cores that run at up to 2.5 GHz. A similar-sized chip on a symmetric board that attaches to the back is a 64 GiB flash memory chip for nonvolatile storage. The other chips on the board include the power management integrated controller and audio amplifier chips.

Chapter 4 explains the datapath and control for a higher-performance design. The memory is where the programs are kept when they are running; it also contains the data needed by running programs. The memory is built from DRAM chips. DRAM stands for dynamic random-access memory. Multiple DRAMs are used together to contain the instructions and data of a program. In contrast to sequential-access memory, such as magnetic tapes, the RAM portion of the term DRAM means that memory accesses take basically the same amount of time no matter what portion of memory is read.

Descending into the depths of any component of the hardware reveals insights into the computer. Inside the processor is another type of memory—cache memory. Cache memory consists of small, fast memory that acts as a buffer for the DRAM memory.

The nontechnical definition of cache is a safe place for hiding things. Cache is built using a different memory technology, static random-access memory (SRAM). As mentioned above, one of the great ideas to improve design is abstraction. One of the most important abstractions is the interface between the hardware and the lowest-level software.

Software communicates to hardware via a vocabulary. The words of the vocabulary are called instructions, and the vocabulary itself is called the instruction set architecture, or simply architecture, of a computer. The combination of the basic instruction set and the operating system interface provided for application programmers is called the application binary interface (ABI).

An instruction set architecture allows computer designers to talk about functions independently from the hardware that performs them. For example, we can talk about the functions of a digital clock (keeping time, displaying the time, setting the alarm) separately from the clock hardware (quartz crystal, LED displays, plastic buttons).

Computer designers distinguish architecture from an implementation of an architecture along the same lines: an implementation is hardware that obeys the architecture abstraction. These ideas bring us to another Big Picture. The ABI defines a standard for binary portability across computers. The size of the chip is 8.4 by 9.9 mm. It has two identical ARM processors or cores in the lower middle of the chip, four small cores on the lower right of the chip, and a graphics processing unit (GPU) on the far right (see Section 6.).

In the middle are second-level cache memory (L2) banks for the big and small cores (see Chapter 5). Courtesy TechInsights, www.techinsights.com. One key interface between the levels of abstraction is the instruction set architecture—the interface between the hardware and low-level software. This abstract interface enables many implementations of varying cost and performance to run identical software.

If we were to lose power to the computer, however, everything would be lost because the memory inside the computer is volatile—that is, when it loses power, it forgets. To distinguish between the volatile memory used to hold data and programs while they are running and this nonvolatile memory used to store data and programs between runs, the term main memory or primary memory is used for the former, and secondary memory for the latter.

Secondary memory forms the next lower layer of the memory hierarchy. DRAMs have dominated main memory since 1975, but magnetic disks dominated secondary memory starting even earlier.

Because of their size and form factor, personal mobile devices use flash memory, a nonvolatile semiconductor memory, instead of disks. Although costing more per bit than disks, flash memory is smaller, comes in much smaller capacities, is more rugged, and is more power efficient than disks. Hence, flash memory is the standard secondary memory for PMDs. Alas, unlike disks and DRAM, flash memory bits wear out after 100,000 to 1,000,000 writes. Thus, file systems must keep track of the number of writes and have a strategy to avoid wearing out storage, such as by moving popular data.
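To illustrate the wear-avoidance strategy just mentioned, here is a minimal Python sketch of write counting and wear leveling; the block count, threshold, and function names are assumptions made for illustration, and real flash translation layers are far more sophisticated.

```python
# Illustrative wear leveling: remap a logical block to the least-worn
# physical block once its write count crosses a threshold.
NUM_BLOCKS = 8                       # assumed size
WEAR_LIMIT = 3                       # assumed; real limits are ~100,000+

writes = [0] * NUM_BLOCKS            # per-physical-block write counts
mapping = {i: i for i in range(NUM_BLOCKS)}  # logical -> physical

def write_block(logical):
    phys = mapping[logical]
    if writes[phys] >= WEAR_LIMIT:
        # Move "popular" data to the least-worn physical block.
        phys = min(range(NUM_BLOCKS), key=lambda b: writes[b])
        mapping[logical] = phys
    writes[phys] += 1

for _ in range(10):                  # hammer a single logical block
    write_block(0)
print(writes)                        # writes spread across blocks: [3, 3, 3, 1, 0, 0, 0, 0]
```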

Chapter 5 describes disks and flash memory in more detail. Just as the processor shown in Figure 1. is connected to memory and I/O devices, computers can be connected to one another over networks. Networks have become so popular that they are the backbone of current computer systems; a new personal mobile device or server without a network interface would be ridiculed.

A DVD disk is nonvolatile, as is a magnetic hard disk: a form of nonvolatile secondary memory composed of rotating platters coated with a magnetic recording material. Perhaps the most popular type of network is Ethernet. It can be up to a kilometer long and transfer data at gigabit-per-second rates. Its length and speed make Ethernet useful to connect computers on the same floor of a building; hence, it is an example of what is generically called a local area network.

Local area networks are interconnected with switches that can also provide routing services and security. Wide area networks cross continents and are the backbone of the Internet, which supports the web. They are typically based on optical fibers and are leased from telecommunication companies. Networks have changed the face of computing in the last 40 years, both by becoming much more ubiquitous and by making dramatic increases in performance.

In the 1970s, very few individuals had access to electronic mail, the Internet and web did not exist, and physically mailing magnetic tapes was the primary way to transfer large amounts of data between two locations.

Local area networks were almost nonexistent, and the few existing wide area networks had limited capacity and restricted access. As networking technology improved, it became considerably cheaper and had a significantly higher capacity. For example, the first standardized local area network technology, developed about 40 years ago, was a version of Ethernet that had a maximum capacity (also called bandwidth) of 10 million bits per second, typically shared by tens of, if not a hundred, computers.

Today, local area network technology offers a capacity of from 1 to 100 gigabits per second, usually shared by at most a few computers. Optical communications technology has allowed similar growth in the capacity of wide area networks, from hundreds of kilobits to gigabits, and from hundreds of computers connected to a worldwide network to millions of computers connected.

This dramatic rise in the deployment of networking, combined with increases in capacity, has made network technology central to the information revolution of the last 30 years. For the last 15 years, another innovation in networking has been reshaping the way computers communicate. Wireless technology is now widespread, and it enabled the post-PC era. The ability to make a radio in the same low-cost semiconductor technology (CMOS) used for memory and microprocessors enabled a significant improvement in price, leading to an explosion in deployment.

Currently available wireless technologies, called by the IEEE standard name 802.11, are quite a bit different from wire-based networks, since all users in an immediate area share the airwaves. Flash memory is a nonvolatile semiconductor memory; it is cheaper and slower than DRAM but more expensive per bit and faster than magnetic disks. For each technology, list its volatility, approximate relative access time, and approximate relative cost compared to DRAM.

Since this technology shapes what computers will be able to do and how quickly they will evolve, we believe all computer professionals should be familiar with the basics of integrated circuits. The integrated circuit (IC) combined dozens to hundreds of transistors into a single chip.

When Gordon Moore predicted the continuous doubling of resources, he was forecasting the growth rate of the number of transistors per chip. To describe the tremendous increase in the number of transistors from hundreds to millions, the adjective very large scale is added to the term, creating the abbreviation VLSI, for very large-scale integrated circuit. This rate of increasing integration has been remarkably stable.

For 35 years, the industry consistently quadrupled capacity every 3 years, resulting in an increase in excess of 16,000 times! To understand how to manufacture integrated circuits, we start at the beginning. The manufacture of a chip begins with silicon, a substance found in sand.

Because silicon does not conduct electricity well, it is called a semiconductor. A VLSI circuit, then, is just billions of combinations of conductors, insulators, and switches manufactured in a single small package. Source: Computer Museum, Boston, with extrapolation by the authors. See Section 1. The y-axis is measured in kibibits (2^10 bits). In recent years, the rate has slowed down and is somewhat closer to doubling every three years.

The manufacturing process for integrated circuits is critical to the cost of the chips and hence important to computer designers.

The process starts with a silicon crystal ingot, which looks like a giant sausage. Today, ingots are 8—12 inches in diameter and about 12—24 inches long.

An ingot is finely sliced into wafers no more than 0.1 inches thick. These wafers then go through a series of processing steps, during which patterns of chemicals are placed on each wafer, creating the transistors, conductors, and insulators discussed earlier. After being sliced from the silicon ingot, blank wafers are put through 20 to 40 steps to create patterned wafers (see Figure 1.).

These patterned wafers are then tested with a wafer tester, and a map of the good parts is made. Next, the wafers are diced into dies see Figure 1.

In this figure, one wafer produced 20 dies, of which 17 passed testing (an X means the die is bad). These good dies are then bonded into packages and tested one more time before shipping the packaged parts to customers; one bad packaged part was found in this final test. A single microscopic flaw in the wafer itself or in one of the dozens of patterning steps can result in that area of the wafer failing. These defects, as they are called, make it virtually impossible to manufacture a perfect wafer. The simplest way to cope with imperfection is to place many independent components on a single wafer.

The patterned wafer is then chopped up, or diced, into these components, called dies and more informally known as chips. Dicing enables you to discard only those dies that were unlucky enough to contain the flaws, rather than the whole wafer.

This concept is quantified by the yield of a process, which is defined as the percentage of good dies from the total number of dies on the wafer. The cost of an integrated circuit rises quickly as the die size increases, due both to the lower yield and to the fewer dies that fit on a wafer. To reduce the cost, a large die can be shrunk by moving to the next-generation process, which uses smaller sizes for both transistors and wires.

This improves both the yield and the die count per wafer. According to AnandTech [1], each Ice Lake die is about 11.4 by 10.7 mm. These packaged parts are tested a final time, since mistakes can occur in packaging, and then they are shipped to customers. While we have talked about the cost of chips, there is a difference between cost and price.

Margins can be higher on unique chips that come from only one company, like microprocessors, versus chips that are commodities supplied by several companies, like DRAMs.

The price fluctuates based on the ratio of supply and demand, and it is easy for multiple companies to build more chips than the market demands. The second equation is an approximation, since it does not subtract the area near the border of the round wafer that cannot accommodate the rectangular dies (see Figure 1.). The final equation is based on empirical observations of yields at integrated circuit factories, with the exponent related to the number of critical processing steps.
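The equations referred to here are the standard relations among die cost, dies per wafer, and yield, where the exponent N is the process-complexity factor tied to the number of critical processing steps:

```latex
\text{Cost per die} = \frac{\text{Cost per wafer}}{\text{Dies per wafer} \times \text{Yield}}

\text{Dies per wafer} \approx \frac{\text{Wafer area}}{\text{Die area}}

\text{Yield} = \frac{1}{\left(1 + \text{Defects per area} \times \text{Die area}\right)^{N}}
```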

Hence, depending on the defect rate and the size of the die and wafer, costs are generally not linear in the die area.

Check Yourself: A key factor in determining the cost of an integrated circuit is volume. Which of the following are reasons why a chip made in high volume should cost less?

1. With high volumes, the manufacturing process can be tuned to a particular design, increasing the yield.
2. It is less work to design a high-volume part than a low-volume part.
3. The masks used to make the chip are expensive, so the cost per chip is lower for higher volumes.
4. Engineering development costs are high and largely independent of volume; thus, the development cost per die is lower with high-volume parts.
5. High-volume parts usually have smaller die sizes than low-volume parts and therefore have higher yield per wafer.

The scale and intricacy of modern software systems, together with the wide range of performance improvement techniques employed by hardware designers, have made performance assessment much more difficult.

When trying to choose among different computers, performance is an important attribute. Accurately measuring and comparing different computers is critical to purchasers and therefore, to designers. The people selling computers know this as well. Hence, understanding how best to measure performance and the limitations of those measurements is important in selecting a computer.

The rest of this section describes different ways in which performance can be determined; then, we describe the metrics for measuring performance from the viewpoint of both a computer user and a designer.

We also look at how these metrics are related and present the classical processor performance equation, which we will use throughout the text.

Defining Performance

When we say one computer has better performance than another, what do we mean? Although this question might seem simple, an analogy with passenger airplanes shows how subtle the question of performance can be. If we wanted to know which of the planes in this table had the best performance, we would first need to define performance.

For example, considering different measures of performance, we see that the plane with the highest cruising speed was the Concorde (retired from service in 2003), the plane with the longest range is the Boeing 777-200LR, and the plane with the largest capacity is the Airbus A380. The last column shows the rate at which the airplane transports passengers, which is the capacity times the cruising speed (ignoring range and takeoff and landing times).

This still leaves two possible definitions. If you were interested in transporting passengers from one point to another, however, the Airbus A380 would clearly be the fastest, as the last column of the figure shows. Similarly, we can define computer performance in several distinct ways. As an individual computer user, you are interested in reducing response time—the time between the start and completion of a task—also referred to as execution time. Datacenter managers often care about increasing throughput or bandwidth—the total amount of work done in a given time.

Hence, in most cases, we will need different performance metrics as well as different sets of applications to benchmark personal mobile devices, which are more focused on response time, versus servers, which are more focused on throughput. Throughput (or bandwidth) is another measure of performance: it is the number of tasks completed per unit time. Do the following changes to a computer system increase throughput, decrease response time, or both? 1. Replacing the processor in a computer with a faster version. 2. Adding additional processors to a system that uses multiple processors for separate tasks—for example, searching the web. ANSWER: Decreasing response time almost always improves throughput.

Hence, in case 1, both response time and throughput are improved. In case 2, no one task gets work done faster, so only throughput increases. If, however, the demand for processing in the second case was almost as large as the throughput, the system might force requests to queue up. In this case, increasing the throughput could also improve response time, since it would reduce the waiting time in the queue.

Thus, in many real computer systems, changing either execution time or throughput often affects the other. In discussing the performance of computers, we will be primarily concerned with response time for the first few chapters. To maximize performance, we want to minimize response time or execution time for some task.

In discussing a computer design, we often want to relate the performance of two different computers quantitatively. In the above example, we could also say that computer B is 1.5 times slower than computer A.

Because performance and execution time are reciprocals, increasing performance requires decreasing execution time. CPU execution time, or simply CPU time, is the actual time the CPU spends computing for a specific task.

Understanding Program Performance

Time is the measure of computer performance: the computer that performs the same amount of work in the least time is the fastest.
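In symbols, the standard relations between performance and execution time are:

```latex
\text{Performance}_X = \frac{1}{\text{Execution time}_X}

\frac{\text{Performance}_X}{\text{Performance}_Y}
  = \frac{\text{Execution time}_Y}{\text{Execution time}_X} = n
  \quad \text{means ``X is } n \text{ times as fast as Y.''}
```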

Program execution time is measured in seconds per program. However, time can be defined in different ways, depending on what we count. The most straightforward definition of time is called wall clock time, response time, or elapsed time. Computers are often shared, however, and a processor may work on several programs simultaneously. In such cases, the system may try to optimize throughput rather than attempt to minimize the elapsed time for one program.

Hence, we might want to distinguish between the elapsed time and the time over which the processor is working on our behalf. Remember, though, that the response time experienced by the user will be the elapsed time of the program, not the CPU time. Differentiating between system and user CPU time is difficult to do accurately, because it is often hard to assign responsibility for operating system activities to one user program rather than another and because of the functionality differences between operating systems.

For consistency, we maintain a distinction between performance based on elapsed time and that based on CPU execution time. We will use the term system performance to refer to elapsed time on an unloaded system and CPU performance to refer to user CPU time.

We will focus on CPU performance in this chapter, although our discussions of how to summarize performance can be applied to either elapsed time or CPU time measurements. Different applications are sensitive to different aspects of the performance of a computer system.

Total elapsed time measured by a wall clock is the measurement of interest. To improve the performance of a program, one must have a clear definition of what performance metric matters and then proceed to find performance bottlenecks by measuring program execution and looking for the likely bottlenecks.

In the following chapters, we will describe how to search for bottlenecks and improve performance in various parts of the system. In particular, computer designers may want to think about a computer by using a measure that relates to how fast the hardware can perform basic functions. Almost all computers are constructed using a clock that determines when events take place in the hardware.

These discrete time intervals are called clock cycles (or ticks, clock ticks, clock periods, clocks, cycles). Designers refer to the length of a clock period both as the time for a complete clock cycle (e.g., 250 picoseconds) and as the clock rate (e.g., 4 gigahertz), which is the inverse of the clock cycle time. In the next subsection, we will formalize the relationship between the clock cycles of the hardware designer and the seconds of the computer user.
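A quick Python check of the inverse relationship between clock period and clock rate, using the example values given above:

```python
# Clock rate is the inverse of the clock period.
clock_period_s = 250e-12             # 250 picoseconds, illustrative
clock_rate_hz = 1 / clock_period_s
print(clock_rate_hz / 1e9, "GHz")    # 4.0 GHz

# And back again: a 4 GHz clock has a 250 ps period.
print(1 / 4e9 * 1e12, "ps")          # 250.0 ps
```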

Suppose we know that an application that uses both personal mobile devices and the Cloud is limited by network performance. For the following changes, state whether only the throughput improves, both response time and throughput improve, or neither improves. An extra network channel is added between the PMD and the Cloud, increasing the total network throughput and reducing the delay to obtain network access since there are now two channels. The networking software is improved, thereby reducing the network communication delay, but not increasing throughput.

More memory is added to the computer. How long will computer C take to run that application? If we could relate these different metrics, we could determine the effect of a design change on the performance as experienced by the user. The clock cycle time is the time for one clock period, usually of the processor clock, which runs at a constant rate.

As we will see in later chapters, the designer often faces a trade-off between the number of clock cycles needed for a program and the length of each cycle. Many techniques that decrease the number of clock cycles may also increase the clock cycle time. Suppose a program runs in 10 seconds on computer A, which has a 2 GHz clock. We are trying to help a computer designer build a computer, B, which will run this program in 6 seconds. The designer has determined that a substantial increase in the clock rate is possible, but this increase will affect the rest of the CPU design, causing computer B to require 1.2 times as many clock cycles as computer A for this program. What clock rate should we tell the designer to target?
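Here is a short Python check of this classic exercise, using the setup stated above (10 seconds at 2 GHz for computer A, and 1.2 times the clock cycles for computer B):

```python
# CPU time = clock cycles / clock rate, so
# clock cycles = CPU time * clock rate.
time_a = 10.0                        # seconds on computer A
rate_a = 2e9                         # 2 GHz
cycles_a = time_a * rate_a           # 20e9 cycles

cycles_b = 1.2 * cycles_a            # B needs 1.2x as many cycles
time_b = 6.0                         # target: 6 seconds
rate_b = cycles_b / time_b
print(rate_b / 1e9, "GHz")           # 4.0 GHz target clock rate
```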

Instruction Performance

The performance equations above did not include any reference to the number of instructions needed for the program.

However, since the compiler clearly generated instructions to execute, and the computer had to execute the instructions to run the program, the execution time must depend on the number of instructions in a program. One way to think about execution time is that it equals the number of instructions executed multiplied by the average time per instruction. Since different instructions may take different amounts of time depending on what they do, the clock cycles per instruction (CPI) is an average over all the instructions executed in the program.

CPI provides one way of comparing two different implementations of the identical instruction set architecture, since the number of instructions executed for a program will, of course, be the same.
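Putting the pieces together gives the classic CPU performance equation in its standard form:

```latex
\text{CPU time} = \text{Instruction count} \times \text{CPI} \times \text{Clock cycle time}
  = \frac{\text{Instruction count} \times \text{CPI}}{\text{Clock rate}}
```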

Using the Performance Equation

Suppose we have two implementations of the same instruction set architecture. Which computer is faster for this program, and by how much? We can use these formulas to compare two different implementations or to evaluate a design alternative if we know its impact on these three parameters.
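As a concrete instance, here is a Python sketch assuming the classic values for this exercise (computer A: 250 ps clock cycle and CPI 2.0; computer B: 500 ps clock cycle and CPI 1.2, running the same program). These numbers are assumptions for illustration, not taken from the text above.

```python
# CPU time = instruction_count * CPI * clock_cycle_time
I = 1.0                        # instruction count (same program, cancels out)
time_a = I * 2.0 * 250e-12     # computer A: CPI 2.0, 250 ps cycle (assumed)
time_b = I * 1.2 * 500e-12     # computer B: CPI 1.2, 500 ps cycle (assumed)

print(time_a, time_b)          # 5e-10 vs 6e-10
print(time_b / time_a)         # 1.2: computer A is 1.2 times as fast
```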

Which will be faster? What is the CPI for each sequence? Sequence 1 executes fewer instructions. Since code sequence 2 takes fewer overall clock cycles but has more instructions, it must have a lower CPI.
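Here is a Python sketch of this comparison, assuming the classic instruction mixes (instruction classes A, B, and C with CPIs of 1, 2, and 3; sequence 1 executes 2, 1, and 2 instructions of each class, while sequence 2 executes 4, 1, and 1). These counts are assumptions used for illustration.

```python
# CPI per instruction class (assumed classic values)
cpi = {"A": 1, "B": 2, "C": 3}
seq1 = {"A": 2, "B": 1, "C": 2}   # 5 instructions
seq2 = {"A": 4, "B": 1, "C": 1}   # 6 instructions

def cycles(seq):
    return sum(count * cpi[cls] for cls, count in seq.items())

for name, seq in [("seq1", seq1), ("seq2", seq2)]:
    n = sum(seq.values())
    c = cycles(seq)
    print(name, "instructions:", n, "cycles:", c, "CPI:", c / n)
# seq1: 5 instructions, 10 cycles, CPI 2.0
# seq2: 6 instructions,  9 cycles, CPI 1.5  -> seq2 is faster
```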

For example, changing the instruction set to lower the instruction count may lead to an organization with a slower clock cycle time or higher CPI that offsets the improvement in instruction count.

Similarly, because CPI depends on the type of instructions executed, the code that executes the fewest number of instructions may not be the fastest. How can we determine the value of these factors in the performance equation? We can measure the CPU execution time by running the program, and the clock cycle time is usually published as part of the documentation for a computer. The instruction count and CPI can be more difficult to obtain. Of course, if we know the clock rate and CPU execution time, we need only one of the instruction count or the CPI to determine the other.

We can measure the instruction count by using software tools that profile the execution or by using a simulator of the architecture. Alternatively, we can use hardware counters, which are included in most processors, to record a variety of measurements, including the number of instructions executed, the average CPI, and often, the sources of performance loss.

Since the instruction count depends on the architecture, but not on the exact implementation, we can measure the instruction count without knowing all the details of the implementation.

The CPI, however, depends on a wide variety of design details in the computer, including both the memory system and the processor structure (as we will see in Chapters 4 and 5), as well as on the mix of instruction types executed in an application. Thus, CPI varies by application, as well as among implementations with the same instruction set. When comparing two computers, you must look at all three components, which combine to form execution time.

If some of the factors are identical, like the clock rate in the above example, performance can be determined by comparing all the nonidentical factors. Several exercises at the end of this chapter ask you to evaluate a series of computer and compiler enhancements that affect clock rate, CPI, and instruction count. The performance of a program depends on the algorithm, the language, the compiler, the architecture, and the actual hardware.

The following table summarizes how these components affect the factors in the CPU performance equation.

Algorithm (affects instruction count and possibly CPI): The algorithm determines the number of source program statements executed and hence the number of processor instructions executed. The algorithm may also affect the CPI, by favoring slower or faster instructions; for example, if the algorithm uses more divides, it will tend to have a higher CPI.

Programming language (affects instruction count and CPI): The programming language certainly affects the instruction count, since statements in the language are translated to processor instructions, which determine instruction count. The language may also affect the CPI because of its features; for example, a language with heavy support for data abstraction (e.g., Java) will require indirect calls, which will use higher-CPI instructions.

Compiler (affects instruction count and CPI): The efficiency of the compiler affects both the instruction count and the average cycles per instruction, since the compiler determines the translation of the source language statements into computer instructions.

Instruction set architecture (affects instruction count, clock rate, and CPI): The instruction set architecture affects all three aspects of CPU performance, since it affects the instructions needed for a function, the cost in cycles of each instruction, and the overall clock rate of the processor.

Intel calls this Turbo mode. A given application written in Java runs 15 seconds on a desktop processor. A new Java compiler is released that requires only 0.6 as many instructions as the old compiler. Unfortunately, it increases the CPI by 1.1. How fast can we expect the application to run using this new compiler? Pick the right answer from the three choices below: (a) 15 × 0.6 × 1.1 = 9.9 seconds; (b) 15 × 0.6 / 1.1 = 8.2 seconds; (c) 15 × 1.1 / 0.6 = 27.5 seconds.
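A quick Python check of the arithmetic behind these choices, treating the new instruction count (0.6x) and the CPI increase (1.1x) as multiplicative factors on CPU time at a fixed clock rate:

```python
old_time = 15.0                # seconds with the old compiler
instr_factor = 0.6             # new compiler emits 0.6x the instructions
cpi_factor = 1.1               # but raises CPI by a factor of 1.1

# CPU time scales with instruction_count * CPI at a fixed clock rate.
new_time = old_time * instr_factor * cpi_factor
print(new_time)                # about 9.9 seconds, so choice (a) is correct
```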

Both clock rate and power increased rapidly for decades and then flattened or dropped off recently. The reason they grew together is that they are correlated, and the reason for their recent slowing is that we have run into the practical power limit for cooling commodity microprocessors. The Pentium 4 made a dramatic jump in clock rate and power but less so in performance.

The Prescott thermal problems led to the abandonment of the Pentium 4 line. The Core 2 line reverts to a simpler pipeline with lower clock rates and multiple processors per chip. The Core i5 pipelines follow in its footsteps. Battery life can trump performance in the personal mobile device, and the architects of warehouse-scale computers try to reduce the costs of powering and cooling 50,000 servers, as the costs are high at this scale.

Just as measuring time in seconds is a safer evaluation of program performance than a rate like MIPS (see Section 1.), the energy metric joules is a better measure than a power rate like watts. The dominant technology for integrated circuits is called CMOS (complementary metal oxide semiconductor). For CMOS, the primary source of energy consumption is so-called dynamic energy—that is, energy that is consumed when transistors switch states from 0 to 1 and vice versa. The capacitive load per transistor is a function of both the number of transistors connected to an output (called the fanout) and the technology, which determines the capacitance of both wires and transistors.
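In the standard CMOS formulation, the dynamic energy of a single switching event and the corresponding dynamic power scale as:

```latex
\text{Energy} \propto \tfrac{1}{2} \times \text{Capacitive load} \times \text{Voltage}^2

\text{Power} \propto \tfrac{1}{2} \times \text{Capacitive load} \times \text{Voltage}^2 \times \text{Frequency switched}
```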

After the requirements are tested, the prototype is discarded and the software engineer develops the complete software application based on the requirements. This specification is based on the TROLL specification language and has to be refined to a complete system specification. OBLOG also employs a CASE tool for introducing specifications and enables a developer to build a prototype by supplying rewrite rules to convert the specifications into code for the prototype.

The aim of OSA is to develop a method that enables system designers to work with different levels of formalism, ranging from informal to mathematically rigorous. In this context, this kind of tunable formalism encourages both theoreticians and practitioners to work with the same model, allowing them to explore the difficulties encountered in making models and languages equivalent and to resolve these difficulties in the context of OSA for a particular language.

A different approach has been proposed by SOFL (Structured Object-based Formal Language), whose aim is to address the integration of formal methods into established industrial software processes using an integration of formal methods, structured analysis and specifications, and an object-based method.

SOFL facilitates the transformation from requirements specifications in a structured style to a design in an object-based style and facilitates the transformation from designs to programs in the appropriate style.

In accordance with the previous arguments, the SOFL proposal attempts to overcome the fact that formal methods have not been largely used in industry, by finding mechanisms to link object-oriented methodology and structured techniques with formal methods. Combining structured and object-oriented techniques in a single method, however, makes it difficult to clarify the method semantics; thus, effective tool support is necessary for checking consistency.

Still another approach is known as TRADE (Toolkit for Requirements and Design Engineering), whose conceptual framework distinguishes external system interactions from internal components. TRADE contains techniques from structured and object-oriented specification and design methods. Although these approaches are of some help for the first phase (i.e., requirements specification), the newly developed code, due to the mismatch between the problem space and the solution space, will commonly contain coding errors and will need to be extensively tested and debugged.

Even if the prototype is not discarded but used as a skeleton for the final application, the software developer must still develop additional code, especially to implement the user interface and error processing.

In this case, there still remains the need for testing and debugging the code the programmer has written.

The rule-rewriting approach of OBLOG, moreover, fails to address this need, because the difficulties associated with programming are merely shifted one level back, to the development of the rewriting rules in an unfamiliar, proprietary language. Other approaches include those of Rational and Sterling, but these are not based on a formal language. Therefore, there exists a long-felt need for improving the software engineering process, especially for reducing the amount of time spent in the programming and testing phases.

In addition, a need exists for a way to reduce programming errors during the course of developing a robust software application. Furthermore, there is also a need for facilitating the maintenance of software applications when their requirements have changed.

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements. Defining these classes starts the process of building the formal specification in the high-level repository. A dialog box is used to define whether each attribute is constant, variable, or derived, the type of data it contains, and other things. The functional model relates services mathematically, through well-formed formulas, to the values of the attributes these services act upon.

An automatic software production system is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.

The computer system includes a bus or other communication mechanism for communicating information, and a processor coupled with the bus for processing information.

The computer system also includes a main memory, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus for storing information and instructions to be executed by the processor. Main memory also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor. The computer system further includes a read-only memory (ROM) or other static storage device coupled to the bus for storing static information and instructions for the processor. A storage device, such as a magnetic disk or optical disk, is provided and coupled to the bus for storing information and instructions.

The computer system may be coupled via the bus to a display, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device, including alphanumeric and other keys, is coupled to the bus for communicating information and command selections to the processor. Another type of user input device is a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor and for controlling cursor movement on the display. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.

The invention is related to the use of the computer system for automatic software production. According to one embodiment of the invention, automatic software production is provided by the computer system in response to the processor executing one or more sequences of one or more instructions contained in main memory. Such instructions may be read into main memory from another computer-readable medium, such as the storage device. Execution of the sequences of instructions contained in main memory causes the processor to perform the process steps described herein.

One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.

Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as the storage device. Volatile media include dynamic memory, such as main memory. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications.

Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor for execution.

For example, the instructions may initially be borne on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to the computer system can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal.

An infrared detector coupled to the bus can receive the data carried in the infrared signal and place the data on the bus. The bus carries the data to main memory, from which the processor retrieves and executes the instructions. The instructions received by main memory may optionally be stored on the storage device either before or after execution by the processor. The computer system also includes a communication interface coupled to the bus. The communication interface provides a two-way data communication coupling to a network link that is connected to a local network. For example, the communication interface may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.

As another example, the communication interface may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. The network link typically provides data communication through one or more networks to other data devices.

For example, the network link may provide a connection through the local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The local network and the Internet both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through the communication interface, which carry the digital data to and from the computer system, are exemplary forms of carrier waves transporting the information.

The computer system can send messages and receive data, including program code, through the network(s), the network link, and the communication interface. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network, and the communication interface. In accordance with the invention, one such downloaded application provides for automatic software production as described herein.

In this manner, the computer system may obtain application code in the form of a carrier wave. The automatic software production system is configured to accept requirements as input and produce a complete, robust application, including both system logic and user-interface code, a database schema, and documentation. In one implementation, the automatic software production system includes a Computer Aided Software Engineering (CASE) tool front end to allow a user to input the requirements, a validator for validating the input requirements, and several translators to convert the validated input requirements into a complete, robust application. These translators may include a system logic translator, a user-interface translator, a database generator, and a documentation generator. During operation of one embodiment, requirements specifying a Conceptual Model for the application are gathered using diagrams and textual interactive dialogs presented by the CASE tool. Preferably, the CASE tool employs object-oriented modeling techniques to avoid the complexity typically associated with the use of purely textual formal methods.
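To make the described pipeline concrete, here is a minimal Python sketch of the flow (CASE tool input, validation, then several translators); every class, function, and field name is invented for illustration and does not come from the patent itself.

```python
# Illustrative sketch of the described pipeline: requirements are
# validated, then fed to translators that emit the application parts.

def validate(spec):
    """Reject specifications that are not complete; only validated
    specifications may be translated, as the text requires."""
    if not spec.get("classes"):
        raise ValueError("incomplete specification: no classes defined")
    return spec

def produce_application(spec):
    spec = validate(spec)
    return {
        "system_logic":    f"code for {len(spec['classes'])} classes",
        "user_interface":  "generated UI code",
        "database_schema": [f"TABLE {c}" for c in spec["classes"]],
        "documentation":   "generated docs",
    }

app = produce_application({"classes": ["Customer", "Order"]})
print(app["database_schema"])   # ['TABLE Customer', 'TABLE Order']
```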

In one implementation, the Conceptual Model is subdivided into four complementary models: an object model, a dynamic model, a functional model, and a presentation model. These models are described in greater detail hereinafter.

After gathering the requirements, the CASE tool stores the input requirements as a formal specification in accordance with a formal specification language, for example, the OASIS language, which is an object-oriented language for information systems developed at the Technical University of Valencia, Spain.

Using the extended grammar defined by the formal language, the validator syntactically and semantically validates the formal specification to be correct and complete. If the formal specification does not pass validation, no application is allowed to be generated; therefore, only correct and complete applications are allowed to be generated.

However, in other embodiments, the database translator just outputs a file having a file structure that is known to the system logic created by the system logic translator. Basically, the structure of the database, table, or other data structure that the database generator creates is defined by the objects and classes defined in the Conceptual Model.

The only thing that is necessary is that the translator provide at least a place to store the states of the objects in the system as defined by their attribute values, and that the attribute values be stored in some format known to the system logic translator, such that the values of attributes can be retrieved from whatever data structure is created by the translator. In other species, the database generator creates a data structure defined by the Conceptual Model, as well as storage for other data from other sources or entered by remote client computers, for use by the code created by the system logic translator. In addition, one implementation also employs a document generator to automatically generate serviceable system documentation from the information introduced in the Conceptual Model.

As mentioned herein above, the CASE tool preferably employs object-oriented modeling techniques to avoid the complexity typically associated with the use of purely textual formal methods. Rather, four complementary models (the object model, the dynamic model, the functional model, and the presentation model) are employed to allow a designer to specify the system requirements.

This feature enables the introduction of well-defined expressions in the specification, which is often lacking in conventional methodologies. In particular, the CASE tool enforces the restriction that only the information relevant for filling a class definition in the formal specification language can be introduced. The use of a formal specification, input by means of the CASE tool, therefore provides the environment to validate and verify the system in the solution space, thereby obtaining a software product that is functionally equivalent to the specification, as explained hereinafter.

Nevertheless, this is always done while preserving the external view, which is compliant with the most widespread modeling techniques, as stated before. In this way, the arid formalism characteristic of many conventional approaches is hidden from the designer, who is made to feel comfortable using a graphical modeling notation. With respect to the notation, conceptual modeling in one embodiment employs diagrams that are compliant with the Unified Modeling Language (UML); thus, system designers need not learn another graphical notation in order to model an information system.

In accordance with widely accepted object-oriented conceptual modeling principles, the Conceptual Model is subdivided into an object model, a dynamic model, and a functional model. These three models, however, are insufficient by themselves to specify a complete application, because a complete application also requires a user interface.

In one embodiment, the CASE tool collects information organized around projects, which correspond to different applications.

Each project built by the CASE tool can include information about classes, relationships between classes, global transactions, global functions, and views. Each class contains attributes, services, derivations, constraints, transaction formulas, triggers, display sets, filters, population selection patterns, a state transition diagram, and formal interfaces. In addition to the information in these lists, a class can also store a name, an alias, and a default population selection interface pattern.
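As an illustration of how such a repository might represent this information, here is a small Python sketch of the project metamodel described above; all names are invented, and the actual implementation described in the patent stores far more detail.

```python
# Illustrative metamodel for the repository described in the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    kind: str            # "constant", "variable", or "derived"
    data_type: str

@dataclass
class Service:
    name: str
    service_type: str    # "event" or "transaction"
    arguments: List[str] = field(default_factory=list)

@dataclass
class ClassDef:
    name: str
    alias: str = ""
    attributes: List[Attribute] = field(default_factory=list)
    services: List[Service] = field(default_factory=list)

@dataclass
class Project:
    classes: List[ClassDef] = field(default_factory=list)
    global_transactions: List[str] = field(default_factory=list)

# Example: a Customer class with one attribute and one event.
project = Project(classes=[ClassDef(
    name="Customer",
    attributes=[Attribute("balance", "variable", "int")],
    services=[Service("new_customer", "event")])])
print(project.classes[0].name)   # Customer
```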

Extra information is stored as remarks, in which the designer can record why a class exists in the model. Each attribute can have the following characteristics: name, formal data type, and so on. Each attribute can also include a list of valuations, which are formulas that declare how the object's state is changed by means of events.

Valuation formulas are structured in the following parts: the condition that must be satisfied to apply the effect, the event, and the effect of the event on the particular attribute. An attribute may also include user-interface patterns, belonging to the presentation model, to be applied to the corresponding service arguments related to the attribute.

Services can be of two types: events and transactions. Events are atomic operations, while transactions are composed of services, which can in turn be events or transactions. Every service can have the following characteristics: name, type of service (event or transaction), service alias, remarks, and a help message.

Events can be of three types: new, destroy, or neither. Events can also be shared by several classes of the project; shared events belong to all classes sharing them. Transactions have a formula that expresses the composition of their component services.

In addition to this information, services store a list of arguments whose characteristics are: name, data type, whether nulls are allowed as a valid value, whether the argument represents a set of objects (a collection), default value, alias, and remarks.

Additionally, each argument can have associated user-interface patterns: an introduction pattern, a population selection pattern, a defined selection pattern and a dependency pattern.

The class can also store a list of derivations and constraints. Each derivation specifies a list of condition-formula pairs, indicating which formula will be applied under each condition. Each constraint is a well-formed formula plus the error message that will be displayed when the constraint is violated.
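
For illustration, and using invented names loosely based on the library example (reader_kind, max_books and the specific formulas are assumptions, not part of the model described here), derivations and constraints might be recorded as follows:

    # Derivations: (condition, formula) pairs; the formula applies when its
    # condition holds.
    derivations = [
        ("age < 18", "reader_kind = 'junior'"),
        ("age >= 18", "reader_kind = 'adult'"),
    ]

    # Constraints: (well-formed formula, error message shown on violation).
    constraints = [
        ("books_count <= max_books", "A reader cannot exceed the loan limit."),
    ]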

For dynamic constraints, the formula is internally translated into a graph which serves as the guide for its evaluation. A class can also store triggers. Each trigger may be composed of a trigger target (specified in terms of self, class or object), a trigger condition, a triggered action (a service plus a list of possible agents to be activated), and a list of default values associated with the arguments of the related service.
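
A minimal sketch of the per-trigger information just listed, with assumed field names, might look like this:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Trigger:
        target: str       # "self", "class" or "object"
        condition: str    # trigger condition, a well-formed formula
        action: str       # the triggered service
        agents: List[str] = field(default_factory=list)         # agents to be activated
        defaults: Dict[str, str] = field(default_factory=dict)  # default argument values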

A class can also have display sets, filters and population selection patterns as user-interface patterns of the presentation model affecting the class. Each display set stores elements of visualization (attributes to be displayed to the user). Each filter is composed of a well-formed formula and a list of auxiliary variables that are useful for defining the formula.

The population selection pattern is related to a display set and a filter. Classes also have a State Transition Diagram, which is a set of states and transitions between them.

Each state transition is related to an action (a service plus a list of possible agents) that can change the state of the object. Actions may have preconditions, together with the corresponding error message to be displayed if the precondition does not hold. Preconditions are formulas that need to be satisfied in order to execute the corresponding action. In case of non-deterministic transitions, determinism is achieved by labelling each transition with a control condition.

A control condition is a formula that specifies which state transition will take effect. Finally, a class can store a list of interfaces. Each interface stores the list of services that an actor (agent) can execute and the list of attributes that can be observed. The model also maintains information on relationships between classes, which can be of two types: aggregation and inheritance.

Each aggregation relationship captures information about cardinalities, whether the aggregation is static or dynamic, whether it is inclusive or referential, whether it has an identification dependence, and a grouping clause when the aggregation is multi-valued. Each inheritance relationship stores the name of the parent class, the name of the child class, and whether the specialization is temporary or permanent. Finally, if the specialization is permanent, it stores a well-formed formula on constant attributes as the specialization condition.
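
The relationship metadata described above can be sketched as two record types (the field names are assumptions):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Aggregation:
        composite: str
        component: str
        cardinality: str            # e.g. "1:1", recorded in each direction
        static: bool                # static vs. dynamic
        inclusive: bool             # inclusive vs. referential
        identification_dependence: bool
        grouping_clause: Optional[str] = None  # only when multi-valued

    @dataclass
    class Inheritance:
        parent: str
        child: str
        permanent: bool             # permanent vs. temporary specialization
        specialization_condition: Optional[str] = None  # required when permanent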

Finally, the project can also capture a list of global transactions, in which the relevant characteristics to be stored include the name of the global transaction, the formula, and the list of arguments. A list of global functions can also be captured, in which each function stores a name, the data type of the returned value, a set of arguments (similar to services) and comments about the function.

A project may have a set of views, which constitute the particular vision that a set of selected agent classes has of the system; that is, the set of formal interfaces (attributes and services) allowed per agent class. Each agent class has a list of interfaces.

The object model is a graphical model that allows the system designer to specify the entities employed in the application in an object-oriented manner, in particular by defining classes for the entities. Thus, the class definitions include, for example, attributes, services and class relationships (aggregation and inheritance).

Additionally, agent relationships are specified to state which services objects of a class are allowed to activate. An agent relationship between classes means one class can invoke the services of another class.

Classes, in the object model, are represented as rectangles with three areas: the class name, the attributes and the services. In the example, the object model includes a loan class with attributes to indicate a loan code and a loan date for when the loan was made.

The loan class also includes two services (methods), one for loaning a book and another for returning the book. The object model also includes a book class having attributes that specify the author of the book, a book code, and a state (e.g., whether the book is currently lent).

Each reader belonging to the library is described with the reader class, whose attributes include the age of the reader, the number of books checked out by the reader, and the name of the reader. An unreliable reader class is also part of the object model to indicate those readers who cannot be trusted. An unreliable reader may be forgiven by a librarian. In the object model, inheritance relationships are represented by using arrows to link classes. For example, the unreliable reader class is connected to the reader class with an arrow; thus, the unreliable reader class is specified to inherit from, or in other terms is a subclass of, the reader class. The arrow linking the subclass and the base class can be labeled with a specialization condition or an event that activates or cancels the child role.

Thus, if a reader is punished, that person becomes an unreliable reader. Conversely, if an unreliable reader is forgiven, that person becomes a normal reader. Aggregation relationships are represented in the object model by using a line with a diamond. The class that has the diamond closest to it is called the composite class, and the other class is the component class.

The cardinality of the aggregation determines how many components can be attached to a given composite and vice versa; the cardinality is the minimum and maximum number of components and composites that can participate in the relationship.

In the example, a book and a reader are aggregated in a loan, because a loan involves lending a book to a reader of the library. The representation of aggregation also includes its cardinalities in both directions (i.e., from composite to component and from component to composite). In the example, the cardinality of the loan:book relationship from loan to book is 1:1, because exactly one book is the subject of a loan in this Conceptual Model, and from book to loan is 0:1, because a book may or may not be lent at a certain moment.

Furthermore, agent relationships are represented by using dotted lines that connect the associated client class and the services of the server class. In the example, a librarian is an agent of the forgive service of the unreliable reader class; thus, there is a dotted line between the forgive service and the librarian class. As another example, readers are agents of the loan book and return book services.

Finally, shared events are represented by using solid lines that connect the associated events between two classes. Additional information in the object model is specified to complete the formal description of the class. Specifically, for every class in the object model, the following information is captured, as shown in TABLE 1. Additional information associated with aggregation and inheritance is also collected. For aggregated classes, the additional information can specify whether the aggregation is an association or a composition in accordance with the UML characterization, and whether the aggregation is static or dynamic.

For inheritance hierarchies, the additional information can specify if a specialization produced by the inheritance is permanent or temporal. If the specialization is permanent, then the corresponding conditions on the constant attributes must characterize the specialization relationship.

Some applications may require a large number of classes to fully specify. In this case, classes may be gathered into clusters. Clusters make it easier for the designer or system analyst to understand the application, one cluster at a time. Thus, clusters help reduce the complexity of the view of the object model.

The class architecture is specified with the object model. Additional features, however, such as which object life cycles can be considered valid and which interobject communication can be established, also have to be input in the system specification. For this purpose, a dynamic model is provided. The dynamic model specifies the behavior of an object in response to services, triggers and global transactions. In one embodiment, the dynamic model is represented by two diagrams: a state transition diagram and an object interaction diagram.

The state transition diagram (STD) is used to describe correct behavior by establishing valid object life cycles for every class. A valid life cycle refers to an appropriate sequence of states that characterizes the correct behavior of the objects that belong to a specific class. Transitions represent valid changes of state. A transition has an action and, optionally, a control condition or guard. An action is composed of a service plus a subset of its valid agents defined in the Object Model.

Actions may have a precondition that must be satisfied in order for their execution to be accepted. A blank circle represents the state previous to the existence of the object.

Transitions that have this state as their source must be composed of creation actions. Similarly, a bull's eye represents the state after destruction of the object. Transitions having this state as their destination must be composed of destruction actions. Intermediate states are represented by circles labeled with a state name. Accordingly, the state transition diagram shows a graphical representation of the various states of an object and the transitions between the states.

States are depicted in the exemplary state transition diagram by means of a circle labeled with the state name. Referring to the figure, transitions are represented by solid arrows from a source state to a destination state.

The middle of the transition arrow is labeled with text displaying the action, the precondition and the guards, if any. In the library example, the Conceptual Model requires that a book can only be returned if the book has been lent.
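
As an illustrative sketch only, the book class's life cycle from the library example can be read as a transition table; the state and service names here are assumptions:

    # (source state, action) -> destination state. None stands for the blank
    # circle (pre-existence); "destroyed" for the bull's eye.
    transitions = {
        (None, "new_book"): "available",
        ("available", "loan_book"): "lent",
        ("lent", "return_book"): "available",   # a book can only be returned if lent
        ("available", "destroy_book"): "destroyed",
    }

    def fire(state, action):
        """Accept an action only if a valid transition exists from the state."""
        try:
            return transitions[(state, action)]
        except KeyError:
            raise ValueError(f"invalid transition: {action} in state {state}")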

The object interaction diagram specifies interobject communication. Two basic interactions are defined: triggers, which are object services that are automatically activated when a pre-specified condition is satisfied, and global transactions, which are themselves services involving services of different objects and/or other global transactions. There is one state transition diagram for every class, but only one object interaction diagram for the whole Conceptual Model, in which the previous interactions are graphically specified.

In one embodiment, boxes with an underlined name represent class objects. Trigger specifications follow this syntax: destination::action if trigger-condition.

The first component of the trigger is the destination, i.e., the object or objects in which the triggered service is to be activated. The trigger destination can be the same object in which the condition is satisfied (self), another specific object (object), or the whole population of a class (class).
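
As a purely illustrative instance of this syntax (the service name and the limit are assumptions drawn from the library example discussed below), a trigger that punishes a reader who has borrowed too many books might read:

    self::punish if books_count >= max_books

Here self is the destination, punish is the triggered action, and the formula after if is the trigger condition.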

Finally, the triggered service and its corresponding triggering relationship are declared. Global transactions are graphically specified by connecting the actions involved in the declared interaction. These actions are represented as solid lines linking the object boxes that provide them. Accordingly, communication between objects and activity rules are described in the object interaction diagram, which presents graphical boxes, graphical triggers, and graphical interactions.

In the object interaction diagram, the graphical interactions are represented by lines for the components of a graphical interaction. Graphical boxes, such as the reader box, are declared, in this case, as special boxes that can reference objects, particular or generic, such as a reader.

Graphical triggers are depicted using solid lines that have a text displaying the service to execute and the triggering condition. Components of graphical interactions also use solid lines. Each one has a text displaying the number of the interaction and the action that will be executed. In the example, the trigger indicates that the reader punish action is to be invoked when the number of books that a reader is currently borrowing reaches a given limit.

Many conventional systems take a shortcut when providing a functional model, which limits the correctness of a functional specification.

Sometimes the model used breaks the homogeneity of the object-oriented models, as happened with the initial versions of OMT, which proposed using structured DFDs as a functional model. The use of DFD techniques in an object modeling context has been criticized for being imprecise, mainly because it offers a perspective of the system (the functional perspective) which differs from that of the other models (the object perspective).

Other methods leave the free specification of the system operations in the hands of the designer, which leads to inconsistencies. One embodiment of the present invention, however, employs a functional model that is quite different from these conventional approaches.

In this functional model, the semantics associated with any change of an object state is captured as a consequence of an event occurrence. Basically, the functional model allows a SOSY modeler to specify a class, an attribute of that class and an event of that class, and then define a mathematical or logical formula that specifies how the attribute's value will be changed when this event happens.

In the preferred embodiment, a condition-action pair is specified for each valuation. The condition is a single mathematical or logical formula which evaluates to a value that can be mapped to only one of two possible values: true or false.

The action is a single mathematical or logical formula which specifies how the value of the attribute is changed if the service is executed and the condition is true. In other embodiments, only a single formula that specifies the change to the attribute if the service is executed is required. The functional model is built in the preferred embodiment by presenting a dialog box that allows the user to choose a class, an attribute of that class and a service of that class, and then fill in one or more formulas or logical expressions (condition-action, or action only) which control how the value of that attribute will be changed when the service is executed.

The important point is that the user be allowed to specify the mathematical or logical operation which will be performed to change the value of the attribute when the service is executed; how the user interface is implemented is not critical. Any means to allow a user to specify the class, the attribute of that class and the service of that class, and then fill in a mathematical or logical expression that controls what happens to the specified attribute when the service is executed, will suffice to practice the invention.

Each of these mathematical expressions is referred to as a valuation. Every valuation has to have a condition-action pair in the preferred embodiment, but in other species, only an action need be specified. The condition can be any well-formed formula resulting in a Boolean value which can be mapped to only one of two possible conditions: true or false.

This valuation formula can be purely mathematical, purely a Boolean logical expression, or a combination of both mathematical operators and Boolean logical expressions. This data structure can be in any format, but it must contain at least the above-identified content. To define the functional model, the following information is declaratively specified by the SOSY modeler: how every event changes the object state depending on the arguments of the involved event and the object's current state.

In particular, the functional model employs the concept of the categorization of valuations. Three types of valuations are defined: push-pop, state-independent and discrete-domain based. Each type fixes the pattern of information required to define its functionality. Push-pop valuations are those whose relevant events increase or decrease the value of the attribute by a given quantity, or reset the attribute to a certain value. State-independent valuations give a new value to the attribute involved independently of the previous attribute's value.

Discrete-domain valuations give a value to the attributes from a limited domain based on the attribute's previous value.
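
To illustrate the three categories with the library example (the event names, attribute names and formulas below are assumptions, and each entry is an (event, condition, effect) triple):

    # Push-pop: relevant events increase, decrease or reset the attribute's value.
    books_count_valuations = [
        ("loan_book",   "true", "books_count + 1"),
        ("return_book", "true", "books_count - 1"),
    ]

    # State-independent: a new value regardless of the previous one.
    name_valuations = [
        ("change_name", "true", "p_new_name"),  # value taken from an event argument
    ]

    # Discrete-domain: a new value from a limited domain, chosen on the basis
    # of the attribute's previous value.
    book_state_valuations = [
        ("loan_book",   "book_state = 'available'", "'lent'"),
        ("return_book", "book_state = 'lent'",      "'available'"),
    ]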

