The most common design trap is the fan trap, which occurs when two 'many-to-one' joins follow one another in master-to-detail form and a query takes a measure from both the leaf table and its immediate master (Mortimer, 2014). A fan trap mostly occurs when a model represents a relationship between two entity types but the pathway describing the connection between occurrences of those entities is ambiguous.
For example, if a single site contains many departments and employs a number of staff, it is ambiguous which staff work for which department. The ambiguity is resolved by restructuring the ER diagram so that the pathway between the entities becomes unambiguous. Having many one-to-many relationships does not always cause a fan trap, though (Gould, 2015). There is, however, no way a fan trap can be detected automatically; the relationships between the tables, and the results they produce in reports, must be analysed visually. It can therefore be said that the fan trap is the most common example of a design trap in a database.
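To make the trap concrete, below is a minimal schema sketch; the table and column names (site, department, staff) are illustrative assumptions, and exact foreign-key syntax varies slightly by DBMS.

```sql
-- Fan trap: both one-to-many relationships fan out from SITE, so the
-- schema gives no pathway from a staff member to a department.
CREATE TABLE site       (site_id INT PRIMARY KEY, site_name VARCHAR(50));
CREATE TABLE department (dept_id INT PRIMARY KEY, dept_name VARCHAR(50),
                         site_id INT REFERENCES site(site_id));
CREATE TABLE staff_bad  (staff_id INT PRIMARY KEY, staff_name VARCHAR(50),
                         site_id INT REFERENCES site(site_id));

-- Resolution: re-route the pathway so staff belong to a department
-- and each department belongs to a site.
CREATE TABLE staff      (staff_id INT PRIMARY KEY, staff_name VARCHAR(50),
                         dept_id INT REFERENCES department(dept_id));
```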
2. Level of normalization that is most important for database design
Normalization is a process that organizes data in a database. It includes creating tables and establishing relationships between them according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency.
Redundant data wastes disk space and creates maintenance problems (Hogan, 2018). If data exists in more than one place and needs to be changed, it must be changed in exactly the same way in every location where it appears. An address change for a customer is much easier to apply if that address is stored only in the customer's table, rather than in several other locations in the database.
There are rules for database normalization, and each rule is called a normal form. If the first rule is observed, the database is said to be in first normal form. As with many formal rules and specifications, real-world circumstances do not always allow for perfect compliance; in general, normalization requires additional tables, and some customers find this cumbersome (Coronel & Morris, 2016). If any of the first three rules of normalization is infringed, the application should anticipate the problems that can occur, such as redundant data and inconsistent dependencies. Without normalization it is impossible to get rid of redundant data, and data redundancy can be considered the biggest problem in a database. Most importantly, duplicated data yields faulty results, and faulty results have the potential to create havoc in any situation where the database is used.
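As an illustrative sketch of eliminating that redundancy (table and column names are assumed for the example), the unnormalized table below repeats the customer's address on every order, while the normalized pair stores it exactly once.

```sql
-- Unnormalized: the address is copied onto every order row,
-- so a change of address must touch many rows.
CREATE TABLE orders_flat (
    order_id   INT PRIMARY KEY,
    cust_name  VARCHAR(50),
    cust_addr  VARCHAR(100),  -- redundant copy per order
    order_date DATE
);

-- Normalized: the address lives in one place only.
CREATE TABLE customer (
    cust_id   INT PRIMARY KEY,
    cust_name VARCHAR(50),
    cust_addr VARCHAR(100)
);
CREATE TABLE orders (
    order_id   INT PRIMARY KEY,
    cust_id    INT REFERENCES customer(cust_id),
    order_date DATE
);
```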
3. Most complete SQL component language
SQL consists of three component languages: Data Definition Language (DDL), Data Manipulation Language (DML) and Data Control Language (DCL).
DDL enables the creation and modification of tables and other objects in SQL; it is used to define a table and its components and to alter its structure (Aguilera, Leners & Walfish, 2016). DML enables manipulation of the data in a table: selecting, inserting, updating and deleting data. DCL controls user access to the SQL database so that further actions, such as defining a table or manipulating its data, can be carried out seamlessly.
From the above definitions, the most complete SQL component language can be identified as DCL (Balan, Hughes & Doucette, 2016), because it allows a user to gain access to a database so that the data can then be manipulated as the user chooses. DDL and DML, on the other hand, can each perform only their own role in a specific database, whereas DCL supports the tasks of both DDL and DML by combining user access with the capability to manipulate any data in any database.
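As a brief hedged sketch, one or two statements from each component language are shown below; the table name product and the user name app_user are assumptions for the example.

```sql
-- DDL: define a structure.
CREATE TABLE product (prod_id INT PRIMARY KEY, prod_name VARCHAR(50));

-- DML: manipulate the data inside that structure.
INSERT INTO product (prod_id, prod_name) VALUES (1, 'Widget');
UPDATE product SET prod_name = 'Gadget' WHERE prod_id = 1;

-- DCL: control who may run the statements above.
GRANT SELECT, INSERT, UPDATE ON product TO app_user;
REVOKE UPDATE ON product FROM app_user;
```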
4. Discuss the appropriateness of declaring an attribute that contains only digits as a character data type instead of a numeric data type
SQL Server offers a wide variety of data types, which creates confusion at implementation time. Most of this confusion arises from choosing between data types on the basis of their limitations rather than their functionality (García, Luengo & Herrera, 2016). Character data types are used to store values that will not be used in mathematical calculations, even when those values consist entirely of numeric characters.
In some cases a sequence of digits is pointless when represented as a number. For example, it is meaningless to apply mathematical functions to a social security number or a phone number. It is in such cases that a user may want to store only digits as a character data type in place of a numeric data type (Andersen, Thomsen & Torp, 2018). It is therefore appropriate, and more cohesive, to declare an attribute that contains only digits as a character data type rather than a numeric one; a numeric type would, for example, silently discard a leading zero, and arithmetic on such values would not be meaningful anyway.
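A minimal sketch under these assumptions: the table and columns are hypothetical, and the digit strings are stored as character types because no arithmetic will ever be applied to them and leading zeros must survive.

```sql
CREATE TABLE customer_contact (
    cust_id INT PRIMARY KEY,  -- numeric: used for joins and ordering
    ssn     CHAR(9),          -- fixed-length digit string, never summed
    phone   VARCHAR(15)       -- variable-length digit string
);

-- A numeric column would silently drop the leading zeros kept here:
INSERT INTO customer_contact VALUES (1, '012345678', '0791234567');
```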
5. Best database design to implement the top-down or the bottom-up approach at the time of modelling a database
The best database design with which to implement the top-down or the bottom-up approach when modelling a database is a payroll management system. A payroll management system reduces manual data entry and excess paperwork when handling payrolls. It manages all kinds of employee details needed to handle payment, such as personal details, designation, company details, salary details, leave details and attendance details (Wilson et al., 2016). The DBMS makes the job of an employee, as well as that of an administrator, much easier.
However, the wider the gap between the highest-paid and lowest-paid salaries in the workforce, the more unrealistic the aggregate statistics appear. The top-down and bottom-up processes are usually used to forecast financial modules, and payroll is also a financial module for an organization (Leis, Kemper & Neumann, 2016). A salient feature of the payroll management system is that it predicts the career graph of an employee and holds a record for every employee in the organization, from the highest level to the lowest. The top-down approach lets the payroll management system assess the organization's payment process as a whole, while the bottom-up approach helps maintain the personal data of each employee. In simple terms, top-down models start with the entire organization and work down, while the bottom-up approach starts with each employee as a single entity and expands outwards; a small query sketch of both directions follows. It is important to understand the individuality of both types of financial forecasting to determine which methodology is ideal for the specific need of forecasting an employee's salary at the end of the month.
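The sketch below illustrates the two directions as queries over an assumed employee table: bottom-up rolls individual salaries up to department totals, while top-down allocates an assumed company-wide budget down to departments. The table, columns and budget figure are all assumptions for illustration.

```sql
CREATE TABLE employee (
    emp_id INT PRIMARY KEY,
    dept   VARCHAR(30),
    salary DECIMAL(10,2)
);

-- Bottom-up: start from individual employees and aggregate upwards.
SELECT dept, SUM(salary) AS dept_payroll
FROM employee
GROUP BY dept;

-- Top-down: start from a total budget (1,000,000 assumed) and
-- allocate it to departments in proportion to headcount.
SELECT dept,
       1000000.0 * COUNT(*) / (SELECT COUNT(*) FROM employee) AS allocated_budget
FROM employee
GROUP BY dept;
```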
6. Example of the possible consequences of the limitation that a DBMS does not guarantee that the semantic meaning of the transaction truly represents the real-world event
In a database management system, a transaction can be defined as a very small unit of a program that may contain several low-level operations. It should maintain Atomicity, Consistency, Isolation and Durability (Ticehurst, 2017); only then can the database provide accuracy, data integrity and completeness. In a multi-transactional environment, however, serial schedules may also be possible.
A transaction may be in one of several states: active, partially committed, committed, failed or aborted. A DBMS, however, does not guarantee that a transaction represents a real-world event, because it is generally built to verify only the syntactic accuracy of the database commands that users submit for execution (Banks & Chitkara, 2016). The DBMS is responsible for checking the availability of the database, the existence of the referenced attributes in the selected tables, and the correctness of the attribute data types. It offers no guarantee that a syntactically correct transaction precisely represents the real-world event; this capability is missing from a DBMS.
For instance, if an end user sells ten units of product 100179 but the operator enters the sale against product 100197, the DBMS cannot detect the error. It will execute the operation, and the database will be in a technically consistent state that is inconsistent with the real world, because the wrong product has been updated in the database.
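The scenario can be sketched as follows, assuming a simple inventory table; transaction syntax varies slightly by DBMS. The statement is syntactically valid, so the DBMS commits it even though it updates the wrong product.

```sql
CREATE TABLE product (prod_code CHAR(6) PRIMARY KEY, qty_on_hand INT);
INSERT INTO product VALUES ('100179', 50), ('100197', 50);

-- The operator meant product 100179 but typed 100197.
START TRANSACTION;
UPDATE product
SET    qty_on_hand = qty_on_hand - 10
WHERE  prod_code = '100197';  -- semantically wrong, silently accepted
COMMIT;
```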
7. Factors that should be kept in mind while writing conditional expressions in SQL code
Using simple literals and columns as operands: Conditional expressions that apply functions to their operands should be avoided. Comparing the contents of a single column to a literal is faster than comparing it to an expression.
Numeric field comparisons: In search conditions, comparing a numeric attribute to a numeric literal is faster than comparing a character attribute to a character literal. In general, the CPU handles numeric comparisons, such as integers and decimals, faster than character and date comparisons (Linoff, 2015). NULL values require additional processing because indexes do not store references to them, so NULL conditions tend to be the slowest of all conditional operands.
Equality comparisons: These are processed faster than inequality comparisons. If an inequality symbol (>, >=, <, <=) is used, the DBMS must perform additional processing to complete the request, because an index usually contains many greater-than or less-than values and only a few equal values.
Equality conditions: These should be written first, because inequality conditions are slower to process than equality conditions. Although most RDBMSs will reorder conditions automatically, paying attention to this detail reduces the burden on the query optimizer.
Using the AND operator and false conditions: When multiple conditions are combined with AND, the whole expression is true only if every condition evaluates to true (Darmont, 2017). If any one of them evaluates to false, the entire expression is false, so the condition most likely to be false should be written first; the DBMS then need not waste time evaluating the remaining conditions. Using this technique presupposes some knowledge of how sparse the data set is; a combined sketch of these factors follows this list.
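The hedged sketch below ties the factors together over an assumed order_line table: column-to-literal comparisons only, numeric equality first, character equality next, and the inequality last.

```sql
-- Assumed table: order_line(prod_id INT, region CHAR(2), qty INT)
SELECT prod_id, qty
FROM   order_line
WHERE  prod_id = 215     -- numeric equality against a literal, first
  AND  region  = 'EU'    -- character equality next
  AND  qty     > 100;    -- inequality last; if this condition is the one
                         -- most likely to be false on this data set,
                         -- moving it first can short-circuit the AND
```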
8. Important factors in the selection of a DBMS
Selecting and implementing a Database Management System is a complex process that involves many people working on specific tasks at a time (Singh & Pandey, 2016). To make the process easier, a few important factors should be kept in mind for a successful DBMS implementation. These factors are detailed below:
Usability: The system should be checked for proper and simple usability, so that every member of the organization involved in managing the database, whether an IT professional or someone from a marketing background, finds it user-friendly. Ease of use for every user should therefore be compulsory in DBMS software.
Reporting and Visualization: Ease of use is a compulsory feature of DBMS software, but ease of visualization is an added advantage. If visualizing a database helps users easily identify the purpose of keeping the data, the database will be acceptable to them and easy to report on as well (Hira & Deshpande, 2018). Running queries, making selections and deciding on segments all become easier if the visualization of the database is appealing to the end user.
Security: Securing data is one of the main features of a database management system. The data held in a DBMS must be secured as close to impermeably as possible. Confidential and sensitive data must be protected from loss, mishandling and theft, in keeping with business norms. Any data breach or malfunction costs a business its reputation, since business data is always under the threat of hacking.
Scalability, Cost and Suitability: Business data is always prone to growth, and an ideal DBMS should be flexible enough to hold data that increases in size with every passing day. Any implementation in the industry should be cost-effective, or the addition will only hamper the company's finances; ideally the DBMS should save time as well as money. Finally, if a DBMS is not suitable for the purpose of the implementation, it should never be introduced into the organizational structure in the first place.
9. How to define multi-dimensional data analysis and explain its advantages to the users for selling a data warehouse idea
Multidimensional data analysis can be defined as the analysis of data and information along several relationships, each representing a dimension. For instance, in a retail business, a data analyst would most probably want information about the sales of a product by region, demographic distribution, quarter and so on. Such requests are handled by a multidimensional data analysis system.
Data warehousing is handled by Multidimensional OLAP, commonly known as MOLAP (Silverman, 2018). MOLAP uses a multidimensional storage array for viewing data. The advantages of using MOLAP as a data warehousing idea are described below, followed by a small query sketch:
Indexing pre-computed and summarized data becomes a very easy task with MOLAP.
MOLAP connects a user to large data sets, making it easy to analyse large amounts of data, including less well-defined data.
The ease of use of MOLAP allows even inexperienced users to have a seamless experience.
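As a relational approximation of a multidimensional view, the sketch below uses GROUP BY CUBE, which several major RDBMSs support; the fact table and its dimensions are assumptions for the example.

```sql
CREATE TABLE sales_fact (
    region    VARCHAR(20),   -- dimension 1
    quarter   CHAR(6),       -- dimension 2, e.g. '2023Q1'
    sales_amt DECIMAL(12,2)  -- measure
);

-- Summarize the measure over every combination of the two dimensions,
-- including per-dimension subtotals and a grand total.
SELECT region, quarter, SUM(sales_amt) AS total_sales
FROM   sales_fact
GROUP  BY CUBE (region, quarter);
```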
10. Opinion and example on Big Data
Big Data can be defined as a voluminous amount of data, both structured and unstructured, generated by daily internet usage and holding information about every user on the internet (Marz & Warren, 2015). The humongous amount of generated data is not the concern here; what really matters is the analysis of that data. The idea of Big Data is still only vaguely understood, because it is not obvious how analysis finds a common pattern in the generated data, and it is therefore hard to see how an organization makes use of the analysed big data for its benefit. The entire concept of big data rests on Volume, Velocity, Variety and Complexity; data is collected and analysed according to these properties. It is opined that Big Data analysis finds specific patterns in the generated data and provides organizations with information such as the causes of failures, issues and risks in almost real time, detecting fraudulent behaviour before the organization is affected by it.
An example of Big Data and its analysis is a very common phenomenon in recent times: the pop-up advertisements seen every time a user surfs the internet (John Walker, 2014). The data collected while the user browses, including every detail of the URLs visited and the choices made, is analysed to present similar points of interest in which the user might indulge. This is done to establish the user's zone of surfing and common interests, which can benefit other companies providing the same products.
References
Aguilera, M. K., Leners, J., & Walfish, M. (2016). U.S. Patent No. 9,268,834. Washington, DC: U.S. Patent and Trademark Office.
Andersen, O., Thomsen, C., & Torp, K. (2018). SimpleETL: ETL Processing by Simple Specifications.
Balan, A. N., Hughes, R. L., & Doucette, M. (2016). U.S. Patent No. 9,372,671. Washington, DC: U.S. Patent and Trademark Office.
Banks, B., & Chitkara, R. (2016). U.S. Patent No. 9,330,276. Washington, DC: U.S. Patent and Trademark Office.
Coronel, C., & Morris, S. (2016). Database systems: design, implementation, & management. Cengage Learning.
Darmont, J. (2017). Database benchmarks. arXiv preprint arXiv:1701.08052.
García, S., Luengo, J., & Herrera, F. (2016). Data preprocessing in data mining. Springer.
Gould, H. (2015). Database Design and Implementation.
Hira, S., & Deshpande, P. S. (2018). Intelligent Multidimensional Modelling. GSTF Journal on Computing (JoC), 2(3).
Hogan, R. (2018). A Practical Guide to Database Design. Chapman and Hall/CRC.
John Walker, S. (2014). Big data: A revolution that will transform how we live, work, and think.
Leis, V., Kemper, A., & Neumann, T. (2016). Scaling HTM-supported database transactions to many cores. IEEE Transactions on Knowledge and Data Engineering, 28(2), 297-310.
Linoff, G. S. (2015). Data analysis using SQL and Excel. John Wiley & Sons.
Marz, N., & Warren, J. (2015). Big Data: Principles and best practices of scalable realtime data systems. Manning Publications Co.
Mortimer, A. J. (2014). Information structure design for databases: a practical guide to data modelling. Butterworth-Heinemann.
Patel, R. (2016). Payroll Management System.
Silverman, B. W. (2018). Density estimation for statistics and data analysis. Routledge.
Singh, P., & Pandey, N. K. (2016). Best selection of the DBMS by multi-objective optimization, 2(05).
Ticehurst, J. L. (2017). U.S. Patent Application No. 14/827,761.
Wilson, R. L., Anderson, R. J., Bartram, G. R., Patel, S. J., & Liu, T. S. K. (2016). U.S. Patent No. 9,442,628. Washington, DC: U.S. Patent and Trademark Office.