Amity Solved Assignment For Fundamentals of E-Commerce
Question 3. What is one of the benefits of layering to a complex system?
Layering is the construction of multiple applications on top of a common IT infrastructure. One of the benefits is that layers are functionally independent, which allows system developers to specialize in their own application and make improvements without affecting the other applications or the underlying infrastructure.
Interoperability - Layering promotes greater interoperability between devices from different manufacturers and even between different generations of the same type of device from the same manufacturer.
Greater Compatibility - One of the greatest benefits of using a hierarchical or layered approach to networking and communications protocols is the greater compatibility it delivers between devices, systems and networks.
Better Flexibility - Layering, and the greater compatibility it delivers, goes a long way towards improving flexibility, particularly in terms of the options and choices that network engineers and administrators alike crave so much.
Flexibility and Peace of Mind - Peace of mind in knowing that if the worst happens and a key core network device suddenly fails without warning, a replacement or temporary stand-by can be put to work with a high degree of confidence that it will do the job.
Even if the stand-by cannot do the job at the same speed, it will still do it until a better, more permanent solution can be implemented. This is far more acceptable than a lengthy loss of network services or asset availability; 80% is much more pleasing than 0%.
Increased Life Expectancy - Product working life expectancies increase because backwards compatibility is made considerably easier. Devices from different technology generations can co-exist, so older units are not discarded as soon as newer technologies are adopted.
Scalability - Experience has shown that a layered or hierarchical approach to networking protocol design and implementation scales better than a flat, horizontal approach.
Mobility - Greater mobility is more readily delivered when layered and segmented strategies are adopted in the architectural design.
Value Added Features - It is far easier to incorporate and implement value added features into products or services when the entire system has been built on the use of a layered philosophy.
Cost Effective Quality - The layered approach has proven time and time again to be the most economical way of developing and implementing any system, whether small, simple, large or complex.
This ease of development and implementation translates to greater efficiency and effectiveness, which in turn translates into greater economic rationalization and cheaper products without compromising quality.
Modularity - I am sure that you have come across plug-ins and add-ons. These are common and classic examples of the benefits to be derived from the use of a hierarchical (layered) approach to design.
Innate Plasticity - Layering allows innate plasticity to be built into devices at all levels and stages, from initial design through implementation, optimization and upgrade cycles, throughout a component's entire useful working lifecycle.
The Graduated, Blended Approach to Migration - Compatibility enables technologies to co-exist side by side, which results in quicker uptake of newer technologies because older asset investments can continue to be productive. Migration to newer technologies and standards can thus be undertaken in stages or phases over a period of time. This is what is known as the graduated, blended approach, the opposite of the sudden adoption approach.
Standardization and Certification - The layered approach to networking protocol specifications facilitates a more streamlined and simplified standardization and certification process; particularly from an "industry" point of view. This is due to the clearer and more distinct definition and demarcation of what functions occur at each layer when the layered approach is taken.
Task Segmentation - Breaking a large complex system into smaller more manageable subcomponents allows for easier development and implementation of new technologies; as well as facilitating human comprehension of what may be very diverse and complex systems.
Portability - Layered networking protocols are much easier to port from one system or architecture to another.
Compartmentalization of Functionality - The compartmentalization or layering of processes, procedures and communications functions gives developers the freedom to concentrate on a specific layer, or on specific functions within that layer's realm of responsibility, without great concern for or modification of any other layer.
Changes within one layer can be considered self-contained and functionally isolated from the other layers. Modifications at one layer will not break or compromise the other layers.
Side-Kicks - The development of "helper" protocols, or side-kicks, is much easier when a layered approach to networking protocols is embraced. This is especially so for "helper" protocols developed more or less as after-thoughts, once the need arose.
Reduced Debugging Time - The time spent debugging can be greatly reduced, because the layered approach allows a fault to be isolated to a particular layer and examined there rather than across the whole system.
Promotion of Multi-Vendor Development - Layering allows for a more precise identification and delineation of task, process and methodology. This permits a clearer definition of what needs to be done, where, when and how it needs to be done, and what or who will do it. These factors promote multi-vendor development through the standardization of networking components at both the hardware and software levels, because of the clear and precise delineation of responsibilities that layering brings to the developers' table.
Easier Binding Implementation - The principle of binding is far easier to implement in layered, tiered, and hierarchical systems. Humans also tend to understand this form more easily than the flat model.
Enhanced Troubleshooting and Fault Identification - Troubleshooting and fault identification are made considerably easier thus resolution times are greatly reduced. Layering allows for examination in isolation of subcomponents as well as the whole.
Enhanced Communications Flow and Support - Adopting the layered approach allows for improved flow and support for communication between diverse systems, networks, hardware, software, and protocols.
Support for Disparate Hosts - Communications between disparate hosts are supported more or less seamlessly, so Unix, PC, Mac and Linux systems, to name but a few, can freely interchange data.
Reduction of the Domino Effect - Another very important advantage of a layered protocol system is that it helps to prevent changes in one layer from affecting other layers. This helps to expedite technology development.
Rapid Application Development (RAD) - Workloads can be evenly distributed which means that multiple activities can be conducted in parallel thereby reducing the time taken to develop, debug, optimize and package new technologies ready for production implementation.
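The functional independence described throughout this list can be sketched in miniature. The sketch below is an illustration only, not a real protocol stack: the layer and header names are made up for the example. Each layer wraps the payload with its own header on the way down and strips only its own header on the way up, so any one layer could be replaced without touching the others.

```python
# Illustrative sketch of layered encapsulation (made-up layer names).

def app_layer_send(text):
    return "APP|" + text                 # application layer adds its header

def transport_layer_send(segment):
    return "TPT|" + segment              # transport layer adds its header

def network_layer_send(packet):
    return "NET|" + packet               # network layer adds its header

def network_layer_recv(frame):
    assert frame.startswith("NET|")      # each layer checks only its own header
    return frame[len("NET|"):]

def transport_layer_recv(packet):
    assert packet.startswith("TPT|")
    return packet[len("TPT|"):]

def app_layer_recv(segment):
    assert segment.startswith("APP|")
    return segment[len("APP|"):]

# Sending side: each layer only calls the one directly beneath it.
wire = network_layer_send(transport_layer_send(app_layer_send("hello")))

# Receiving side: each layer strips its own header and passes the rest up.
message = app_layer_recv(transport_layer_recv(network_layer_recv(wire)))
print(message)  # hello
```

Because each function touches only its own header, swapping in a different network layer (say, one that also adds a checksum) would leave the application and transport functions untouched, which is exactly the independence the points above describe.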
Question 5. What is the most valuable function of the proxy server?
A proxy server has a large variety of potential purposes, including:
- To keep machines behind it anonymous (mainly for security).
- To speed up access to resources (using caching). Web proxies are commonly used to cache web pages from a web server.
- To apply access policy to network services or content, e.g. to block undesired sites.
- To log / audit usage, i.e. to provide company employee Internet usage reporting.
- To bypass security or parental controls.
- To scan transmitted content for malware before delivery.
- To scan outbound content, e.g., for data leak protection.
- To circumvent regional restrictions.
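Several of the purposes listed above (caching, access policy, and logging) can be sketched together. The class below is a toy illustration in Python; its names and the in-memory fetch function are invented for the example and stand in for real HTTP traffic.

```python
# Toy sketch of three proxy-server roles: caching, access policy, logging.

class ToyProxy:
    def __init__(self, fetch, blocked=()):
        self.fetch = fetch            # function that contacts the "origin server"
        self.blocked = set(blocked)   # access policy: URLs to refuse
        self.cache = {}               # URL -> cached response
        self.log = []                 # audit trail of every request

    def get(self, url):
        self.log.append(url)                   # log/audit usage
        if url in self.blocked:
            return "403 Forbidden"             # apply access policy
        if url not in self.cache:
            self.cache[url] = self.fetch(url)  # first request goes to origin
        return self.cache[url]                 # repeats served from cache

origin_hits = []
def fake_origin(url):
    origin_hits.append(url)          # count how often the origin is contacted
    return f"content of {url}"

proxy = ToyProxy(fake_origin, blocked={"http://undesired.example"})
proxy.get("http://news.example")     # fetched from origin
proxy.get("http://news.example")     # served from cache; origin not contacted
print(len(origin_hits))              # 1
print(proxy.get("http://undesired.example"))  # 403 Forbidden
```

The second request never reaches the origin, which is the speed-up that caching web proxies provide, while the blocked URL and the log illustrate policy enforcement and usage reporting.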
Question 6. What is the purpose of the domain name system (DNS)?
The Domain Name System (DNS) is a hierarchical naming system built on a distributed database for computers, services, or any resource connected to the Internet or a private network. Most importantly, it translates domain names meaningful to humans into the numerical identifiers associated with networking equipment for the purpose of locating and addressing these devices worldwide.
An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to the addresses 192.0.43.10 (IPv4) and 2001:500:88:200::10 (IPv6).
The Domain Name System makes it possible to assign domain names to groups of Internet resources and users in a meaningful way, independent of each entity's physical location. Because of this, World Wide Web (WWW) hyperlinks and Internet contact information can remain consistent and constant even if the current Internet routing arrangements change or the participant uses a mobile device. Internet domain names are easier to remember than IP addresses such as
208.77.188.166 (IPv4) or 2001:db8:1f70::999:de8:7648:6e8 (IPv6). Users take advantage of this when they recite meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates them. The Domain Name System distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain. Authoritative name servers are assigned to be responsible for their particular domains, and in turn can assign other authoritative name servers for their sub-domains.
This mechanism has made the DNS distributed and fault tolerant and has helped avoid the need for a single central register to be continually consulted and updated. In general, the Domain Name System also stores other types of information, such as the list of mail servers that accept email for a given Internet domain. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet.
A DNS server is where the computer goes to translate a web address that you type in into a numeric IP address, after which it goes to that address. So basically you type www.geekstogo.com into Internet Explorer (or any other web browser; it works in exactly the same way). The browser goes to a DNS server, either one you have specified or one it has been given, which converts www.geekstogo.com into the site's numeric IP address, and the browser then goes there. When you specify several DNS servers, the order you list them in is the order in which they are consulted when looking up IP addresses: the computer asks the first server for the proper number, and if that server does not give a number (for example because it is overloaded with requests, offline, or generally not working), the computer asks the next server in the list for the site's IP. Only when every listed server has failed to answer does it report that there is no page to find. You can add as many DNS servers as you like; the computer will just work its way down the list trying to find a requested site's proper address before timing out. A common scenario when connected to a provider is that the provider is so busy with its user base that the DNS servers get overloaded, so you can connect but you can't go anywhere.
The Domain Name System, or DNS, makes browsing the Web simpler and more intuitive. It allows the tens of millions of computers connected to the Internet to find one another and communicate efficiently. DNS also allows individual nations to identify and optimize their websites for local populations, according to the Internet Corporation for Assigned Names and Numbers (ICANN).
Hierarchies: Domain names are grouped into a series of top-level domains or TLDs such as .com, .net, .org and .gov. In addition, every country has its own TLD: for example, the TLD for the United States is ".us"; ".fr" represents France, ".in" denotes India, and so on. The TLD appears at the end of the full domain name.
The second-level domain contains the name of the website. For example, in "ehow.com", the second-level domain name is "ehow". The third-level domain, which appears at the beginning of some domain names, was used in the early days of the World Wide Web to signify that the domain was either a website (represented by "www") or a file transfer site ("ftp").
The third-level domain is now used to signify any sub-domain, which is often just a sub-section of a particular website.
Convenience: Without DNS, people wishing to access a particular online resource would have to know its IP address or would be required to look it up. The IP address is a cumbersome series of numbers separated by dots or decimal points. The DNS automatically maps convenient domain names, which humans can easily use and remember, to these long numbers.
Optimized Service: The top-level domain often indicates the nation of origin through a two-character abbreviation. The ability to recognize websites by country allows national registry operators to apply the best mix of linguistic and cultural policies for those domains, thereby optimizing websites for convenient access by users in each nation.
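The hierarchy described above (top-level, second-level, and third-level domains) can be illustrated with a short sketch that reads a domain name right to left. The function name here is made up for the example.

```python
# Sketch: decompose a domain name into the DNS hierarchy described above.

def domain_hierarchy(name):
    labels = name.rstrip(".").split(".")     # "www.ehow.com" -> ["www", "ehow", "com"]
    parts = {"top-level": labels[-1]}        # rightmost label is the TLD
    if len(labels) >= 2:
        parts["second-level"] = labels[-2]   # the name of the website
    if len(labels) >= 3:
        parts["sub-domains"] = labels[:-2]   # e.g. "www" in www.ehow.com
    return parts

print(domain_hierarchy("www.ehow.com"))
# {'top-level': 'com', 'second-level': 'ehow', 'sub-domains': ['www']}
```

Reading right to left mirrors how DNS delegation works: the root delegates ".com" to the TLD operators, "ehow.com" to its registrant, who in turn controls any sub-domains such as "www".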
Question 7. What do you understand by a digital signature? Explain its application and verification diagrammatically.
A digital signature or digital signature scheme is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, and that it was not altered in transit. Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering. Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature, but not all electronic signatures use digital signatures.
In some countries, including the United States, India, and members of the European Union, electronic signatures have legal significance. However, laws concerning electronic signatures do not always make clear whether they are digital cryptographic signatures in the sense used here, leaving the legal definition, and so their importance, somewhat confused.
Digital signatures employ a type of asymmetric cryptography. For messages sent through a non-secure channel, a properly implemented digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects; properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes in the sense used here are cryptographically based, and must be implemented properly to be effective.
Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature remains valid. Digitally signed messages may be anything representable as a bit string: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.
A digital signature (not to be confused with a digital certificate) is an electronic signature that can be used to authenticate the identity of the sender of a message or the signer of a document, and possibly to ensure that the original content of the message or document that has been sent is unchanged. Digital signatures are easily transportable, cannot be imitated by someone else, and can be automatically time-stamped. The ability to ensure that the original signed message arrived means that the sender cannot easily repudiate it later. A digital signature can be used with any kind of message, whether it is encrypted or not, simply so that the receiver can be sure of the sender's identity and that the message arrived intact. A digital certificate contains the digital signature of the certificate-issuing authority so that anyone can verify that the certificate is real.
How It Works
Assume you were going to send the draft of a contract to your lawyer in another town. You want to give your lawyer the assurance that it was unchanged from what you sent and that it is really from you.
- You copy and paste the contract (it's a short one!) into an e-mail note.
- Using special software, you obtain a message hash (mathematical summary) of the contract.
- You then use a private key that you have previously obtained from a public-private key authority to encrypt the hash.
- The encrypted hash becomes your digital signature of the message. (Note that it will be different each time you send a message.)
At the other end, your lawyer receives the message.
- To make sure it's intact and from you, your lawyer makes a hash of the received message.
- Your lawyer then uses your public key to decrypt the message hash or summary.
- If the hashes match, the received message is valid.
- Read the case study given below and answer the questions given at the end.
ABC Ltd is a manufacturer of mobile handsets. It has its manufacturing plant in Bangalore and its offices and retail outlets in different cities in India and abroad. The organization wants to have information systems connecting all the above facilities and also providing access to its suppliers as well as customers.
1 Discuss various issues in developing information systems and fulfilling information needs at different levels in the organization.
Information Systems (IS) is an academic and professional discipline bridging the business field and the well-defined computer science field, and is evolving toward a new scientific area of study. The information systems discipline is supported by the theoretical foundations of information and computation, giving scholars unique opportunities to explore the academics of various business models as well as related algorithmic processes within a computer science discipline. Typically, information systems (including the more common legacy information systems) comprise people, procedures, data, software, and hardware that are used to gather and analyze digital information. Specifically, computer-based information systems are complementary networks of hardware and software that people and organizations use to collect, filter, process, create, and distribute data. Computer Information Systems (CIS) is often a track within the computer science field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society.
Yes, there are many issues that would be faced while implementing and developing an information system. Some of the key points are:
- Integrating the system throughout the organization and yet serving specific needs
- Training managers and employees
- Managing the costs of information
- Managing user demands on the system
Among the most important problems in information systems development are low productivity, a large number of failures, and an inadequate alignment of ISs with business needs. The first problem, low productivity, has been recognized in the term "software crisis", as indicated by the development backlog and maintenance problems. Simply, demands for building new or improved ISs have increased faster than our ability to develop them. Some reasons are: the increasing cost of software development (especially when compared to the decreasing cost of hardware), the limited supply of personnel and funding, and only moderate productivity improvements.
Second, IS development (ISD) efforts have resulted in a large number of outright failures. These failures are sometimes due to economic mismatches, such as budget and schedule overruns, but surprisingly often due to poor product quality and insufficient user satisfaction. For example, one survey (Gladden 1982) estimates that 75% of IS developments undertaken are never completed, or the resulting system is never used. According to the Standish Group (1995) only 16% of all projects are delivered on time and within their budget. This study, conducted as a survey among 365 information technology managers, also reveals that 31% of ISD projects were canceled prior to completion and the majority, 53%, were completed but over budget and offered less functionality than originally specified. Unfortunately this area has not been studied in enough detail to find general reasons for failures. As a result, we must mostly rely on cases and reports of ISD failures.
Third, from the business point of view, there has been growing criticism of the poor alignment of ISs and business needs. While an increasing part of organizations’ resources are spent on recording, searching, refining and analyzing information, the link between ISs and organizational performance and strategies has been shown to be dubious. For example, most managers and users are still facing situations where they cannot get information they need to run their units. Hence, ISD is continually challenged by the dynamic nature of business together with the ways that business activities are organized and supported by ISs.
All the above problems are further aggravated by the increasing complexity and size of software products. Each generation has brought new application areas as well as extended functionality leading to larger systems, which are harder to design, construct and maintain. Moreover, because of a large number of new technical options and innovations available - like client/server architectures, object-oriented approaches, and electronic commerce - novel technical aspects are transforming the practice of ISD. All in all, it seems to be commonly recognized that ISD is not satisfying organizations’ needs, whether they are technical, economical, or behavioral. Consequently, companies world-wide are facing challenges in developing new strategies for ISD as well as in finding supporting tools and ways of working
- Explain different security threats in the context of e-commerce for the above company.
For ABC Ltd, the vulnerability of a system exists at the entry and exit points within the system, which can be classified as below:
- Shopper's computer
- Network connection between shopper and Web site's server
- Web site's server
- Software vendor
Points the attacker can target
This section describes potential security attack methods that ABC Ltd could face from an attacker or hacker.
Some of the easiest and most profitable attacks are based on tricking the shopper, also known as social engineering techniques. These attacks involve surveillance of the shopper's behavior, gathering information to use against the shopper. For example, a mother's maiden name is a common challenge question used by numerous sites. If one of these sites is tricked into giving away a password once the challenge question is answered, then not only has this site been compromised, but it is also likely that the shopper used the same logon ID and password on other sites.
A common scenario is that the attacker calls the shopper, pretending to be a representative from a site visited, and extracts information. The attacker then calls a customer service representative at the site, posing as the shopper and providing personal information. The attacker then asks for the password to be reset to a specific value.
Another common form of social engineering attack is the phishing scheme. Typo pirates play on the names of famous sites to collect authentication and registration information. For example, http://www.ibm.com/shop is registered by the attacker as www.ibn.com/shop. A shopper mistypes the address, enters the illegitimate site and provides confidential information. Alternatively, the attacker sends emails spoofed to look like they came from legitimate sites. The link inside the email maps to a rogue site that collects the information.
Millions of computers are added to the Internet every month. Most users' knowledge of the security vulnerabilities of their systems is vague at best. Additionally, software and hardware vendors, in their quest to ensure that their products are easy to install, will ship products with security features disabled. In most cases, enabling security features requires a non-technical user to read manuals written for the technologist. The confused user does not attempt to enable the security features. This creates a treasure trove for attackers.
A popular technique for gaining entry into the shopper's system is to use a tool, such as SATAN, to perform port scans on a computer to detect entry points into the machine. Based on the open ports found, the attacker can use various techniques to gain entry into the user's system. Upon entry, the attacker scans the file system for personal information, such as passwords.
While the software and hardware security solutions available protect the public's systems, they are not silver bullets. A user who purchases firewall software to protect his computer may find there are conflicts with other software on his system. To resolve the conflict, the user disables enough capabilities to render the firewall software useless.
Sniffing the network
In this scheme, the attacker monitors the data between the shopper's computer and the server. He collects data about the shopper or steals personal information, such as credit card numbers.
There are points in the network where this attack is more practical than others. If the attacker sits somewhere in the middle of the Internet, the attack becomes impractical: a request from the client to the server is broken up into small pieces known as packets as it leaves the client's computer and is reconstructed at the server, and the packets of a request are sent through different routes. The attacker therefore cannot access all the packets of a request and cannot decipher what message was sent.
Take the example of a shopper in Toronto purchasing goods from a store in Los Angeles. Some packets for a request are routed through New York, while others are routed through Chicago. A more practical location for this attack is near the shopper's computer or the server. Wireless hubs make attacks on the shopper's computer network the better choice, because most wireless hubs are shipped with security features disabled. This allows an attacker to easily scan unencrypted traffic from the user's computer.
[Figure: Attacker sniffing the network between client and server]
Guessing passwords
Another common attack is to guess a user's password. This style of attack is either manual or automated. Manual attacks are laborious, and only successful if the attacker knows something about the shopper, for example that the shopper uses their child's name as the password. Automated attacks have a higher likelihood of success, because the probability of guessing a user ID/password combination becomes more significant as the number of tries increases. Tools exist that use all the words in the dictionary to test user ID/password combinations, or that attack popular user ID/password combinations. The attacker can automate the attack to run against multiple sites at one time.
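The automated attack described above can be sketched in a few lines. The word list and "stolen" hash below are invented for the illustration; the point is to show why common words and names make weak passwords.

```python
import hashlib

# Sketch of an automated dictionary attack against a stolen password hash.
def dictionary_attack(stolen_hash, wordlist):
    for word in wordlist:
        # Hash each candidate the same way the site would and compare.
        if hashlib.sha256(word.encode()).hexdigest() == stolen_hash:
            return word          # guessed the password
    return None                  # password was not in the dictionary

# Suppose the shopper used their child's name as the password.
stolen = hashlib.sha256(b"emma").hexdigest()
guesses = ["password", "123456", "letmein", "emma", "qwerty"]
print(dictionary_attack(stolen, guesses))  # emma
```

A real tool would iterate over an entire dictionary plus popular ID/password combinations, which is why sites enforce lockouts, rate limits and strong-password rules against exactly this loop.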
Using denial of service attacks
The denial of service (DoS) attack is one of the best examples of impacting site availability. It involves getting the server to perform a large number of mundane tasks, exceeding its capacity to cope with any other task. For example, if everyone in a large meeting asked you your name all at once, and every time you answered they asked you again, you would have experienced a personal denial of service attack. To ask a computer its name, you use ping, and ping can be used to build an effective DoS attack. The smart hacker gets the server to use more computational resources in processing the request than the adversary does in generating it.
Distributed DoS is a type of attack used on popular sites, such as Yahoo!. In this type of attack, the hacker infects computers on the Internet via a virus or other means. The infected computers become slaves to the hacker, who controls them at a predetermined time to bombard the target server with useless but resource-intensive requests. This attack not only causes the target site to experience problems, but also affects the entire Internet, as a huge number of packets are routed via many different paths to the target.
[Figure: Denial of service attacks]
Using known server bugs
The attacker analyzes the site to find what types of software are used on the site. He then proceeds to find what patches were issued for the software. Additionally, he searches on how to exploit a system without the patch. He proceeds to try each of the exploits. The sophisticated attacker finds a weakness in a similar type of software, and tries to use that to exploit the system. This is a simple, but effective attack. With millions of servers online, what is the probability that a system administrator forgot to apply a patch?
Using server root exploits
Root exploits refer to techniques that gain super user access to the server. This is the most coveted type of exploit because the possibilities are limitless. When you attack a shopper or his computer, you can only affect one individual. With a root exploit, you gain control of the merchant's server and all the shoppers' information on the site. There are two main types of root exploits: buffer overflow attacks and executing scripts against a server.
In a buffer overflow attack, the hacker takes advantage of a specific type of computer program bug that involves the allocation of storage during program execution. The technique involves tricking the server into executing code written by the attacker.
The other technique uses knowledge of scripts that are executed by the server. This is easily and freely found in the programming guides for the server. The attacker tries to construct scripts in the URL of his browser to retrieve information from the server. This technique is frequently used when the attacker is trying to retrieve data from the server's database.
Assignment - C
- The primary focus of most B2C applications is generating ____.
(d). Web Site
- Which is most significant for web based advertisers?
(b). Page Views
(c). Click Throughs
- Digital products are particularly appealing for a company´s bottom line because of-
(a). The freedom from the law of diminishing returns
(b). The integration of the value chain.
(c). The increase in brand recognition.
(d). The changes they bring to the industry.
- The differences between B2B and B2C exchanges include
(a) Size of customer set
(b) Transaction volume
(c) Form of payment
(d) Level of customization on products/services
(A). a and b
(B). a, b, and c
(C). b and c
(D). All of the above
- What is the most significant part of e-commerce:
- Security-and-risk services include--
(a). Firewalls & policies for remote access
(b). Encryption and use of passwords
(c). Disaster planning and recovery
(d). All of the above
(e). a & b only
- Business Plans are important when trying to find capital to start up your new business. Important elements of a business plan include:
(a). Sales And Marketing
(b). Human resources handbook
(c). Business description
(d). a and c
- Based on the study, in the supply side initiatives, which of the following clusters was the only one found to be critical enterprise-wide?
(a). IT management
(c). Data management
- E-commerce increases competition by: erasing geographical boundaries, empowering customers and suppliers, commoditizing new products, etc. How do companies usually solve this problem?
(a). By competing on price
(b). By selling only through traditional channels.
(c). By lowering costs
(d). By creating attractive websites
- On which form of e-commerce does Dell Computer Corporation rely in conducting its business?
(d). None of the above
(e). All of the above
- What is the 'last mile' in the last mile problem? The link between your...
(a). Computer and telephone
(b). Home and telephone provider's local office
(c). Office and server
(d). Home and internet service provider
- Which of the following is a function of a proxy server?
(a). Maintaining a log of transactions
(b). Caching pages to reduce page load times
(c). Performing virus checks
(d). Forwarding transactions from a user to the appropriate server
- An example of the supply chain of commerce is:
(a). A company turns blocks of wood into pencils.
(b). A department supplies processed data to another department within a company.
(c). A consumer purchases canned vegetables at the store.
(d). None of the above
- Just after your customers have accepted your revolutionary new e-commerce idea, which of the following is not expected to immediately happen?
(a). Competitor catch-up moves
(c). First-mover expansion
(d). None of the above
- Which of the following statements about E-Commerce and E-Business is true?
(a). E-Commerce involves buying and selling over the internet while E-Business does not.
(b). E-Commerce is B2C (business to consumer) while E-Business is B2B (business to business).
(c). E-Business is a broader term that encompasses E-Commerce (buying and selling) as well as doing other forms of business over the internet.
(d). None of the above.
- Where do CGI (Common Gateway Interface) application programs or scripts run?
(a). On the client through a web browser
(b). On the client through temporary stored files
(c). On the web server
(d). Where the user installs them
(e). None of the above
- In which model is the application logic partitioned among the clients and multiple specialized servers?
1. Two tier
2. Three tier
3. N tier
(c). 2 & 3
- Which of the following are the 3 types of web information system logic?
(a). Presentation, business, information/data
(b). Presentation, information/data, active server pages
(c). Business, information/data, client/server
- Software, music, digitized images, electronic games, and pornography can be revenue sources for B2C e-commerce through:
(a). Selling services
(b). Doing customization
(c). Selling digital products
(d). Selling physical products
- What e-commerce category is the largest in terms of revenue?
(a). Business to Business (B2B)
(b). Intra-Business (B2E)
(c). Business to Consumer (B2C)
(d). Consumer to Consumer (C2C)
- An application layer protocol, such as FTP or HTTP, is transparent to the end user.
(d). None of the above
- B2B & B2C IT initiatives can use the same E-Commerce platforms
(d). None of the above
- B2B involves small, focused customer set with large transaction volume per customer, periodic consolidated payments and significant customizations of products and services
(d). None of the above
- Two computers can communicate using different communication protocols.
(d). None of the above
- Which is/are types of e-commerce?
(d). All of the above
- Which of the following items is used to protect your computer from unwanted intruders?
(a). A cookie.
(b). A browser.
(c). A firewall.
(d). A server.
- For selling physical products on the Internet, what is the key to profitability?
(b). Cost Control
(c). Brand Recognition
- Which of the following B2C companies is the best example of achieving its financial success through controlling its cost?
(e). None of the above
- AsianAvenue.com, BlackVoices.com, iVillage.com, SeniorNet.org are all examples of what?
(a). Intermediary Services websites
(b). Physical Communities
(c). B2C websites
(d). Virtual Communities
- Which of the following is the least attractive product to sell online?
(a). Downloadable music
(c). A PDA
(d). Electronic stock trading
- In the e-mail address email@example.com, what is the top-level domain? (com)
- What do you think cookies do?
(a). They are a threat to privacy
(b). They help the user not to repeat some input info
(c). They personalize the user's webpage
(d). B and c
- Much of Amazon.com's initial success can be attributed to which of the following:
(a). Low prices
(b). Brand recognition
(c). Fast web connections
(d). Customer service
- It is particularly difficult to maintain a competitive advantage based on ________.
(d). Internal Cost Reduction
- What type of application has the potential to change a market or even create a new market?
(a). Software application
(b). Intelligent application
(c). Killer application
(d). Business application
- Why did the e-commerce boom, as evidenced by soaring stock prices of Internet businesses such as Pets.com and eToys, go bust in 2000?
(a). Websites started by techies who lack business knowledge
(b). Lack of good business model
(c). Investors' and entrepreneurs' greed and ignorance
(d). All of the above
- Why can't new connection infrastructure like DSL, cable modems, and fiber optics solve the last mile problem?
(d). All of the Above
- These are all uses of plug-ins except?
I. Air fresheners
II. Speed up data transmission
III. Enhance browser capability
IV. To view different file types
(a) I and II
(b) III and IV
(d) I and IV
- A system with universally accepted standards for storing, retrieving, formatting, and displaying information in a networked environment best defines:
(a) A web site.
(b) A web location.
(c) The World Wide Web.
(d) An intranet.
- What's the real potential of e-commerce?
(a) Making a profit
(b) Generating Revenue
(c) Improving efficiency
(d) Buying and selling on the internet and WWW