Month: March 2016

Difference Between Prototype Model and Waterfall Model

Almost always, there is more than one way of solving a problem. Though the end result might be the same, many different paths can be taken to arrive at the solution. Similarly, every software development problem is amenable to a range of different solutions. A software application can be created in many ways, and a development team must adopt the process best suited to the nature of the end product.

One of the most traditional approaches to software development is the waterfall model, while a less conventional one is the prototype model. These software development models are influenced by the manufacturing processes of the electronics and hardware industries. In this Buzzle article, I present the difference between the prototype model and the waterfall model by comparing their features.

Ideally, any company would like to adopt a software development process that makes optimum use of resources and delivers a bug-free end product that perfectly meets users’ expectations, within the set budget and time frame. Before we look at the difference between the two models, let me provide a brief overview of how the waterfall and prototype models work.

How Does the Waterfall Model Work?

When you look at the waterfall model, the words that come to mind are ‘structured’ and ‘orderly’. The waterfall model is inspired by the ‘assembly line’ philosophy of the hardware industry, wherein every stage is initiated only after the successful conclusion of the previous one. It’s called the ‘waterfall model’ because every step depends on the earlier one and builds up, or ‘flows’, from the work done in the previous phases.

The whole process of software development, according to the waterfall model, begins with understanding the requirements and expectations of the customer or end user. Once the requirements are clearly understood by the developers, analysis and design of the software begin.

This phase is the most intensive of all and involves the top developers, who ideate a design that will meet all user requirements and be robust enough for implementation. Once the design is ready, coding begins. Separate teams focus on small parts of the overall project, and all these parts are put together in the integration phase that follows.

Once the program is ready after integration, the testing and debugging phase begins. Here, every feature and function of the software is tested, and bugs, if any, are rectified. This is followed by the actual on-site implementation of the application for the client. A dedicated team takes care of future maintenance of the software and customer service.
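The strictly sequential flow described above can be sketched in a few lines of Python. This is an illustration only; the phase names and string outputs are hypothetical, not part of any real framework:

```python
def waterfall(requirements):
    """Each phase consumes only the output of the phase before it."""
    design = f"design<{requirements}>"   # analysis & design
    code = f"code<{design}>"             # coding
    build = f"build<{code}>"             # integration
    tested = f"tested<{build}>"          # testing & debugging
    return f"deployed<{tested}>"         # on-site implementation & maintenance

result = waterfall("inventory tracking")
print(result)
```

Because each phase wraps only the previous phase’s output, a change in the requirements at the end forces a rerun of every stage, which is exactly why late changes are expensive in this model.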

How Does the Prototype Model Work?

Let us now see what the prototype model of software development is like. This model is radically different from the waterfall model in many ways. As the name itself suggests, this process involves the creation of ‘prototypes’, or ‘raw models’, of the final product right at the start, which are continuously improved through user feedback and developer effort until a final product that exactly conforms to user requirements is created.

The developers provide the client with a rough prototype application soon after the requirements have been gathered. This is a preliminary and ‘sketchy’ model of the final product, with basic functionality and a basic user interface. By analyzing the prototype, the client then provides feedback to the developers about whether this is the kind of product he wants.

Based on the suggested changes and the overall client report, the prototype is reworked, and it keeps improving through better design until it is transformed into a program that satisfies all client requirements. This is a kind of iterative, ‘interactive’ design, wherein the end user is involved at every stage of development. Every evolving prototype goes through testing and debugging phases, including the final product before deployment.
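This feedback loop can be sketched minimally in Python, with a simulated client whose wanted feature set stands in for real feedback. All names here are illustrative:

```python
def client_feedback(prototype, wanted):
    """Simulated client review: return the features still missing."""
    return wanted - prototype

def prototype_model(wanted):
    prototype = set()   # the initial rough, 'sketchy' prototype
    rounds = 0
    # Keep reworking until the client reports nothing missing.
    while (missing := client_feedback(prototype, wanted)):
        prototype.add(missing.pop())   # rework one suggestion per cycle
        rounds += 1
    return prototype, rounds

final, rounds = prototype_model({"login", "search", "reports"})
```

Unlike the waterfall sketch, the loop terminates only when the (simulated) client is satisfied, which mirrors how the number of iterations is driven by feedback rather than fixed up front.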

Prototype Model Vs. Waterfall Model

Now that you have a basic understanding of what the waterfall model and prototype model are all about, let me point out the prime differences between these two software design philosophies. The waterfall model directly delivers the final product to the user, and his feedback is taken in only before the design phase. Conversely, the prototype model creates several rough working applications and involves constant user interaction until the developers come up with the final application that satisfies the user.

While the waterfall model is linear, the prototype model is non-linear and evolutionary in nature. Both processes have their merits and demerits. According to experts, the prototype model is well suited for online applications, where user interfaces are the most important component and clients are not clear about what exactly they need in the final product.

On the other hand, the waterfall model is better suited for more conventional software projects, where user requirements are clear right from the start. The prototype model ensures user involvement, which makes last-minute changes possible. The waterfall model makes it difficult to implement any changes suggested by the user after the initial specification.

To conclude, it’s apparent that the prototype model is best suited when the client himself is not sure of what he wants, while the waterfall model is a safe bet when the end user or client is clear about what he wants. Before deciding which model is ideally suited for your own software development project, study the nature of the client requirements and choose the process that gives you the best chance of creating a satisfying end product.

Software Engineering: Reason and Concept

Some decades back, when the computer was newly born and a completely new thing to people, very few could operate one, and software was not given much emphasis. At that time, hardware was the most important factor deciding the cost of implementation and the success rate of the system being developed. Very few people knew programming; computer programming was considered an art gifted to a few rather than a skill of logical thinking. This approach was full of risk, and in most cases the system undertaken for development never reached completion. Soon after, some emphasis was placed on software development, and this started a new era in which people slowly gave it more importance.

People who wrote software hardly followed any methodology, approach, or discipline that would lead to the successful implementation of a bug-free and fully functional system. There hardly existed any specific documentation or system design approach; such things were confined to those who developed hardware systems. Software development plans and designs were confined to concepts in the mind.

Even after a number of people jumped into this field, the lack of proper development strategies, documentation, and maintenance plans meant that the software systems developed were costlier than before and took longer to build; sometimes it was next to impossible to predict the completion date of a system under development. The lines of code grew to very large numbers, increasing the complexity of the project, and as the complexity of the software increased, so did the number of bugs in the system. Most of the time, the system that was developed was unusable by the customer, because of problems such as very late delivery and numerous bugs, and there were no plans for situations in which the system needed to be maintained. This led to the situation called the ‘Software Crisis’. Most software projects, which were just concepts in the brain with no standard methodologies or practices to follow, experienced failure, causing losses of millions of dollars.

The ‘Software Crisis’ was a situation that made people think seriously about software development processes and the practices that could be followed to ensure a successful, cost-effective system implementation that could be delivered on time and used by the customer. People were compelled to think about new ideas for the systematic development of software systems. This gave birth to the most crucial part of the software development process, constituting the most modern and advanced thinking and even the basics of project management: the software development process needed to be given an engineering perspective. This approach is called ‘Software Engineering’.

The standard definition of ‘Software Engineering’ is ‘the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.’

The subject of Software Engineering uses a systematic approach to developing any software project. It shows how a software project can be handled systematically and cost-effectively and completed with a higher success rate. It includes planning and developing strategies, defining timelines, and following guidelines to ensure the successful completion of particular phases; following predefined software development life cycles; and using documentation plans for follow-ups, all in order to complete the various phases of the software development process and provide better support for the system developed.

Software Engineering takes an all-round approach to finding out the customer’s needs, even asking customers for their opinions, before proceeding toward development of the desired product. Various methodologies and practices, such as the ‘Waterfall Model’ and the ‘Spiral Model’, were developed under Software Engineering; they provide guidelines to follow during software development and help ensure on-time completion of the project. These approaches divide the software development process into small tasks or phases, such as requirement gathering and analysis, system design, and coding, which makes the project much easier to manage. They also help in understanding the problems faced during system development and after deployment at the customer’s site, and in deciding the strategies to follow to take care of those problems and to provide strong support for the system developed. For example, problems with one phase are resolved in the next phase, and after deployment, issues such as user queries and bugs that were not yet detected are handled as part of the system’s support and maintenance; all these strategies are decided while following the various methodologies.

Today, almost all software development projects use Software Engineering concepts and follow its standard guidelines, which ensures a safer pathway for these projects. Future projects will surely continue to follow Software Engineering concepts as well, perhaps with improved strategies and methodologies.

Hyper Threading Technology

We all want our computers to be as speedy as they can be, and there are many different ways to increase computer performance through different types of upgrades. Processors have become speedier because of demand and competition. To make processors faster, chipmakers have been creating new CPU architectures to process information and milk every ounce of processing power available. Intel created Hyper-Threading technology as a CPU architecture upgrade and quietly integrated it into some of its processors for development and testing purposes.

It is based on the idea of simultaneous multi-threading (SMT). Traditional multiprocessor systems use multiple physical CPUs to process multiple threads at once; as an alternative, Intel created multiple logical processors inside a single physical CPU. Intel recognized that CPUs are inherently inefficient and have lots of computing power that never gets used.

It allows multi-threaded software applications to execute threads in parallel, so better resource utilization provides higher processing throughput. It is essentially an improved form of super-threading that was first introduced on Intel Xeon processors and was later added to Pentium 4 processors. This type of threading technology had not previously been present in general-purpose microprocessors.

To boost performance, threading was first allowed in software by splitting instructions into multiple streams so that multiple processors could act upon them. With this technology, processor-level threading can be utilized, providing more efficient use of resources for greater parallelism and improved performance on today’s multi-threaded software.

Hyper-Threading is a multi-threading technology in which SMT is achieved by duplicating the architectural state on each logical processor while sharing one set of processor execution resources. It also produces faster response times in a multi-tasking workload environment. By permitting the processor to use on-die resources that would otherwise sit idle, it offers a performance boost on multi-threading and multi-tasking operations for the microarchitecture.

In a CPU, every clock cycle offers the chance to do one or more operations, but one processor can only handle so much during an individual clock cycle. Hyper-Threading permits a single physical CPU to fool an operating system capable of SMT operations into thinking there are two processors.

It produces logical processors to handle multiple threads in the same time slice, where a single physical processor would normally only be able to handle a single operation. There are some prerequisites that must be satisfied before you can take advantage of this technology: you must have a Hyper-Threading-enabled processor, an HT-enabled chipset, BIOS, and operating system, and your operating system must support multiple threads. Finally, the number and types of applications being used make a difference in the performance increase as well.
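As a small illustration, Python’s standard library can report how many logical processors the operating system sees and spread worker threads across them. Note that `os.cpu_count()` counts logical processors (on a Hyper-Threading system, typically twice the physical core count), and Python alone cannot distinguish logical from physical cores:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Number of logical processors the OS exposes (HT doubles this
# relative to physical cores on an HT-enabled system).
logical = os.cpu_count() or 1
print(f"logical processors visible to the OS: {logical}")

def work(n):
    return sum(i * i for i in range(n))

# One worker per logical processor; the OS scheduler maps these
# threads onto the logical CPUs that Hyper-Threading exposes.
with ThreadPoolExecutor(max_workers=logical) as pool:
    results = list(pool.map(work, [10_000] * logical))
```

Whether this actually runs faster depends on the workload and, for CPython specifically, on the global interpreter lock; the point is only that a multi-threaded application hands the scheduling decision to the OS, which is where logical processors pay off.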

Hyper-Threading is a hardware upgrade that makes use of the wasted power of a CPU, and it also helps the operating system and applications run more efficiently, doing more at once. There are millions of transistors inside a CPU that turn on and off to process commands.

By adding more transistors, chipmakers typically add more brute-force computing power, but more transistors mean a larger CPU and more heat. Hyper-Threading is aimed at increasing performance without significantly increasing the number of transistors on the chip, keeping the CPU footprint smaller.

It offers two logical processors in one physical package. Each logical processor must share external resources like memory and hard disk, and must also use the same physical processor for computations. The performance boost will not scale the same way as a true multiprocessor architecture because of the shared nature of Hyper-Threading processors; system performance will fall somewhere between that of a single CPU without Hyper-Threading and a multi-processor system with two comparable CPUs.

No particular software platform is required to benefit from it: some applications are already multi-threaded and will automatically take advantage of this technology. Multi-threaded applications take full benefit of the increased performance it offers, letting users see immediate gains when multitasking. It also improves reaction and response times and increases the number of users a server can support. Today’s multi-processing software programs are compatible with Hyper-Threading-enabled platforms, but further performance gains can only be realized by specifically tuning the software to utilize it. For future software optimization and business growth, this technology complements traditional multi-processing by providing additional headroom.

Reverse Engineering for Software Debugging

Reverse engineering in computer programming is a skill by which software can be brought back toward its basic form through a series of steps, ideally taking the software back to its source code level. Quite often, software cannot be brought all the way down to the source code level, but it can be brought down to the assembly language level. Assembly language is a CPU-understandable language, and it differs between CPU architectures.

Assembly language has certain instructions, known as assembly codes, which define the flow of a program, the program structure, functions, and so on. Everything the software is capable of doing can be modified or deleted using these codes. Debugging is finding bugs in our software and correcting them as and when necessary.
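As a loose analogy, Python’s standard `dis` module ‘disassembles’ a function into bytecode instructions, much as a native disassembler lowers compiled machine code to assembly mnemonics. The `checksum` function here is just a made-up example to disassemble:

```python
import dis

def checksum(data):
    """Toy function: sum the bytes of 'data' modulo 256."""
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

# Each Instruction carries an opcode name (opname), its arguments,
# and offsets -- the same kind of information a reverse engineer
# reads off native assembly to reconstruct program flow.
instructions = [i.opname for i in dis.get_instructions(checksum)]
print(instructions)
```

Bytecode is much higher-level than real assembly, but the exercise is the same: from the instruction stream alone, one can recover the loop structure and the arithmetic without ever seeing the source.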

Debugging is most often done during the development phase, that is, while the software is being coded. However, at times, some bugs and errors cannot be corrected at this stage. Bugs can be identified and corrected while the program’s source code is small, but it becomes extremely difficult to correct them when the code is huge and complex. By understanding the techniques, procedures, and tools of reverse engineering, programmers can eliminate such bugs and build better software.

This process is not just about bugs; it sharpens the entire practice of developing software. Extensibility is another major advantage of reverse engineering, as we generally see when software companies release patches for a security exploit or a missing feature.

Today, many crackers are born on the lanes of the information highway, exploiting and misusing technology. Crackers are people who reverse engineer software, not for the purpose of debugging but for breaking into it. They use its tools and techniques to hack authentication and security mechanisms. Crackers steal passwords and patch software illegally, which they can automate by creating cracks: small utility programs, distributed across the Internet and through email, that help other people break the security mechanisms of software with just a click of a button and without any prior knowledge.

Although this process has caused and continues to cause certain problems, it is here to stay, to help and to build better software. As the old saying goes, “What’s good is going to be broken!” The only way out of the misuse of reverse engineering is to outwit the cracker.