Wednesday, June 30, 2004

(.NET) Visual Studio 2005 Home: Microsoft Visual Studio 2005 Beta 1 Readme

The requirements for VS2005 are on the high end. It requires a lot of hard disk space: while VS2003 occupied around 3 GB with MSDN, this one requires a minimum of 4.5 GB.
As for RAM, 256 MB is the recommended amount, and as we know, the recommended is always a must for many software packages, lest the performance degrade.

One thing to note is that it has different requirements for 32-bit and 64-bit computers. For 64-bit computers, Windows 2000 will not suffice, while it will do for 32-bit ones.

With VS 2005, we have VB 2005, VC++ 2005, SQL Server 2005, VC# 2005, VJ# 2005 and Visual Web Developer 2005.

As my main programming interests are Visual Basic and the Web, I will be concentrating on those for now before exploring new domains.

More on this soon.

(.NET) Visual Studio 2005 Beta Home

Hey, for those out there waiting for the beta release of Microsoft's VS2005, here is the URL for the beta version.

I am checking it out today and have got to work on it to get to know its features.

Happy Programming...

Tuesday, June 29, 2004

Are You Talking To Me? A Microphone For Clearer Speech


This article from Microsoft Research explains all about the new microphones on the market, the ones that help you hear and deliver speech with less disturbance.

(COM) Writing COM Objects with Scripting Languages

Writing COM Objects with Scripting Languages
Dino Esposito
Microsoft Corporation

November 1998

Note This article explores older scripting technologies. For information on new scripting technologies, please see Windows Script Components: They Get Around.
Summary: Compares the DHTML and XML scripting languages and discusses the pros and cons of building Component Object Model (COM) objects with each. (15 printed pages)

DHTML vs. XML Scriptlets
The XML Scriptlets Architecture
XML Scriptlet Files
Interface Handlers
The Automation Handler
XML Scriptlets and Windows Script Host
XML Compliance

Introduction
Technology is rapidly evolving, there's no doubt about it. If you want proof, however, consider this story: Once upon a time, about a year ago, little actors called scriptlets made their debut in the dynamic HTML (DHTML) theatre. They were HTML pages acting as real components. They exposed properties and methods. Scriptlets were also capable of bubbling system events and firing their own notifications. Judging from this, they were really a great and decisive step forward in the long run to componentize the Web. Due to scriptlets, writing reusable HTML code was no longer an issue. You could arrange an HTML component, such as a data-aware table, make it as parametric as possible, and use just a line of code to insert it in any page. Seemingly the only drawback to scriptlets was the cross-browser compatibility issue, because their use was limited to Microsoft Internet Explorer 4.0 and later software.

But as I said earlier, technology is rapidly evolving. So what yesterday were revolutionary and cutting-edge solutions are today going to be superseded by more general and powerful approaches. That's just what happened to scriptlets. Only a few months after their release—they originally shipped with Internet Explorer 4.0—scriptlets were renamed to the more specific Dynamic HTML scriptlets. The role of Dynamic HTML scriptlets was then reduced by an emerging new technology called XML scriptlets or, now, simply scriptlets.

As a result, today we have two possible choices when it comes to designing script components to be used in Web-based projects. The first is DHTML scriptlets, and the second is XML scriptlets. They aren't mutually exclusive, but each has its own independent and well-defined field of application. Furthermore, at least in principle, you can use both either on the client or the server side of the Web application.

(COM) The Basics of Programming Model Design

This is an old article from MSDN that explains the basics of model design in programming. A good article on COM design. I was searching for one of this kind way back in 2000, but found it only now, accidentally, when I am into COM again. Still, it is a good one for developers at different stages.

Excerpt from the Article:
Every component developer has to design a programming model. When you write a Component Object Model (COM) control or dynamic-link library (DLL), you must decide how that component will be programmable or, in other words, how developers will write code to manipulate that component.

This usually opens up a whole set of questions: When should I use a property and when should I use a method? How should I name my properties and methods? How should I use and name enumerations? When should I raise events? Developers often can't find any answers to these questions and are left to figure out rules by examining existing published programming models. This can be quite hazardous, especially when the designers of those existing models were not operating by any logical rules either. This article presents some basic rules that I've learned by designing a number of programming models for Microsoft products. Any theory presented here was arrived at following lots of trial and error (sometimes more error than success!) and I have found that by following these rules, one can create usable, understandable and powerful programming models.

You may be confused by the term programming model, but it's really just the correct term for what most people call object model. By programming model I mean the set of interfaces, properties, methods, and events exposed from a component that allows a developer to write programs to manipulate it.

Designing a good programming model is just as important as designing a good user interface (UI) and, not so coincidentally, many of the principles used in UI design can be directly applied to programming model design. Good UI design attempts to let the user work at a much higher level of abstraction than the internal implementation. The UI presents a logical view of the functionality as opposed to the physical reality of that functionality. It expresses things in a way that matches how the intended user thinks—not necessarily how the system actually works.

In the same way, a good programming model doesn't just expose internal structures—it exposes its functionality at a higher level of abstraction so that the customer (the developer) can concentrate on what he or she wants to do and not on how to accomplish a simple task.
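The property-vs-method question the article raises can be illustrated with a small sketch (mine, not the article's; the `Document` class and all its members are invented for illustration): a question about state becomes a property, an action that changes state becomes a method, and the internal storage stays hidden behind the higher-level abstraction.

```python
# A hypothetical illustration of the abstraction principle: expose what
# the developer wants to do, not how the component does it internally.

class Document:
    """A component whose programming model hides its internal storage."""

    def __init__(self):
        self._paragraphs = []  # internal structure, never exposed directly

    @property
    def word_count(self):
        """A question about state, so a read-only property, not a method."""
        return sum(len(p.split()) for p in self._paragraphs)

    def append_paragraph(self, text):
        """An action that changes state, so a method rather than a property."""
        self._paragraphs.append(text)

doc = Document()
doc.append_paragraph("Programming models should model the task.")
print(doc.word_count)  # 6
```

The caller never learns whether paragraphs are kept in a list, a file, or a database; that is exactly the kind of logical-over-physical view the article advocates.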

XML Developer Center: XML and the Database: XML to SQL: Using SQLXML Bulkload in .NET Framework

Amar Nalla showcases the Bulkload functionality available in SQLXML, which can be used to transform XML input into SQL data by building a .NET Framework-based application that shreds RSS feed data into a database.

XML has become the industry-wide standard for exchanging data across business systems. At the same time, the use of a relational backend as the data-store is well established. As developers, we are sometimes faced with the task of exposing existing relational data as XML to share it across multiple systems, or we are required to take input XML and shred the data into a relational database. Microsoft SQL Server 2000 has excellent built in support to achieve both tasks. In addition to the support inside the server, SQL Server 2000 has XML support at the middle-tier that can be used to perform the above tasks. The middle-tier based solution is enabled by using SQLXML. SQLXML 3.0, which is the current released version, is an add-on product that can be used along with SQL Server to enable extensive XML features. For an overview of the XML features provided by SQL Server 2000 and SQLXML, read Andrew Conrad's Survey of SQL Server 2000's XML Features.

Note This column is based on the SQL Server XML functionality that is available in SQLXML 3.0 SP2. It is the third major release in the SQLXML web release series. SQLXML periodically adds new XML functionality to SQL Server 2000, to keep pace with the fast changing world of XML and in response to customer requests. To download the latest Web release, or to find more information on the new features offered in the XML for SQL Server Web Releases, see the SQLXML Developer Center.
This article takes an in-depth look at the SQLXML Bulkload functionality by building an application that takes an input XML stream and shreds the information into SQL Server relational tables. It also tries to answer some of the common questions that arise when using Bulkload in an enterprise application. Specifically, the article covers the following:

Transforming a complex hierarchy of XML into a single table.
Using Bulkload in a .NET application.
Using streams as the input data mechanism for Bulkload in .NET.
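As a rough illustration of the shredding idea (sketched here in Python with SQLite rather than SQLXML and SQL Server; the feed content, table and column names are my own invention), each `<item>` element of an RSS document maps to one relational row:

```python
# Shredding XML into relational rows: walk the item elements of a feed
# and insert one row per item into a table.

import sqlite3
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel>
  <title>Sample Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (title TEXT, link TEXT)")

root = ET.fromstring(rss)
for item in root.iter("item"):
    conn.execute("INSERT INTO items VALUES (?, ?)",
                 (item.findtext("title"), item.findtext("link")))

print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 2
```

SQLXML does the same mapping declaratively through an annotated schema, which is what makes it attractive for the complex hierarchies the article discusses.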

Saturday, June 26, 2004

(.NET - ASP.NET) Dynamically Adding Controls to a Windows Form

Another important technique in ASP.NET programming is adding controls dynamically. This feature provides greater flexibility in reusing the same page for different purposes and presenting it in different ways. It also helps in hiding the code involved.

This article from ASPfree.com explains the same in detail with example.

Excerpt from the article:
For a simple and effective solution to writing an application that can be used by several different clients, look into Skousen's article here on defining attributes needed to create a form dynamically. He reduces the need to compile and to take extra Tylenol.
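The data-driven idea described above can be sketched generically (this is my own illustration, not Skousen's code; the field definitions and markup are invented): each entry in a list of attribute definitions drives the creation of one control, so supporting a new client means changing data, not recompiling.

```python
# Building a form from attribute definitions instead of hard-coded markup.

field_specs = [
    {"type": "text", "name": "username", "label": "User name"},
    {"type": "checkbox", "name": "remember", "label": "Remember me"},
]

def render(spec):
    """Turn one field definition into the markup for one control."""
    kind = '' if spec["type"] == "text" else ' type="checkbox"'
    return f'<label>{spec["label"]}</label><input{kind} name="{spec["name"]}">'

html = "\n".join(render(s) for s in field_specs)
print(html)
```

In ASP.NET the same pattern would create server controls and add them to a container's Controls collection, but the shape of the solution is identical.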

(.NET - ASP.NET) How to Play with DataGrid Control

The DataGrid, one of the most frequently used controls in ASP.NET, is also a mystery to many. This article explains in detail how to work with it and make use of its different properties.

The explanation is detailed, with an example that covers all the possible features of the DataGrid, its events, and its methods. The CheckBox control, the one that we use for selecting rows, is also explained.

DatePicker and Calendar are two controls which, when used with the DataGrid, become complicated. Unfortunately, these were not explained here. Had they been explained, the task would have been easy for many developers.

In fact, using the Calendar control inside a DataGrid is something I still make errors with, and I am looking for proper guidance on that.

Excerpt from the article:
This article provides a demo of how to divide up data into pages. Mayank Gupta illustrates how to automatically create pages containing the number of rows you require and how to make a custom interface.
Introduction

One example of using the DataGrid control to display data is the use of “paging”. When you have a large number of rows to display, sending them all to the client in one group at once doesn’t make sense. Your client will get impatient waiting for them all to arrive and may find that they actually wanted to see something else instead. To prevent this aggravation and waste of bandwidth, we actually divide the output into pages containing 10-20 rows per page.

DataGrid web control makes it easy to provide a paging feature. It contains logic that can automatically create pages containing the number of rows you require, and it can render the navigation control in a range of ways. You can also take over paging entirely and implement all the features yourself to provide a custom interface.
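The paging arithmetic the DataGrid automates boils down to a slice: given the full row set, a page index and a page size, return just that page plus the total page count. A minimal sketch of that logic (mine, not code from the article):

```python
# The custom-paging calculation: page_count from the total row count,
# then a slice for the requested page.

import math

def get_page(rows, page_index, page_size=10):
    """Return the rows for one zero-based page and the total page count."""
    page_count = math.ceil(len(rows) / page_size)
    start = page_index * page_size
    return rows[start:start + page_size], page_count

rows = [f"row {i}" for i in range(45)]
page, total = get_page(rows, page_index=2, page_size=10)
print(total)    # 5 (45 rows at 10 per page)
print(page[0])  # row 20
```

This is exactly what "taking over paging entirely" means: you compute the slice yourself, typically in the database query, and feed the grid one page at a time.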

(RSS Feeds) RSS 2.0 Specification


RSS feeds have become the most common feature of every website, and they also help us in staying up to date on different topics.

Gone are the days when we used to worry about losing info or important articles, thanks to these feeds.

This article from Harvard University discusses in detail the architecture and the different aspects of RSS 2.0.
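As a quick illustration of the format (built here with Python's standard library; the titles and URL are placeholders of mine), a minimal RSS 2.0 document has an `<rss version="2.0">` root and a `<channel>` carrying the required `title`, `link` and `description` elements, followed by the items:

```python
# Building a minimal RSS 2.0 feed with the standard library.

import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")

# The three required channel elements per the RSS 2.0 specification.
for tag, text in [("title", "My Blog"),
                  ("link", "http://example.com/"),
                  ("description", "Notes on .NET, COM and eLearning")]:
    ET.SubElement(channel, tag).text = text

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Hello, world"

print(ET.tostring(rss, encoding="unicode"))
```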

(Database DB) Database Normalization

Put simply, normalization is an attempt to make sure you do not destroy true data or create false data in your database. Errors are avoided by representing a fact in the database one way, one time, and in one place. Duplicate data is a problem as old as data processing. Efficient and accurate data processing relies on minimizing redundant data and maximizing data integrity. Normalization and the Normal Forms (NF) are efforts to achieve these two core objectives of data processing. This article will examine the concept of normalization in-depth.

Database design is typically the result of data modelling. To borrow from Joe Celko's "Data & Databases", normalization encourages solid data modelling and seeks to structure the database in such a way that it correctly models reality and avoids anomalies. When we consider the basic way in which a user or application interacts with a database, we can conclude that there are three basic kinds of anomalies:

Insertion anomalies
Update anomalies
Deletion anomalies
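An update anomaly, for instance, is easy to demonstrate (a toy example of mine, not from the article): when the same fact is repeated on several rows, updating only some of them leaves the database contradicting itself.

```python
# Update anomaly in a denormalized table: the customer's city is repeated
# on every order row, so a partial update creates two "truths".

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, city TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Acme", "Boston"), (2, "Acme", "Boston")])

# Acme moves, but only one row gets updated -- the anomaly:
conn.execute("UPDATE orders SET city = 'Denver' WHERE order_id = 1")
cities = conn.execute(
    "SELECT DISTINCT city FROM orders WHERE customer = 'Acme'").fetchall()
print(len(cities))  # 2 -- one customer, two contradictory cities
```

Normalizing, i.e. moving the city into a separate customers table keyed by customer, stores the fact once, so the contradiction cannot arise.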

Friday, June 25, 2004

(SQL Server) Writing Language-Portable Transact-SQL

Ken discusses the enhanced localization features in SQL Server 2000 that make it much easier to write Transact-SQL code that is portable across languages.

This article discusses a few of the issues facing developers who wish to write Transact-SQL code that is portable across languages. The introduction of Unicode data types, collations, and various other internationalization and localization features in SQL Server 7.0 made writing language-portable Transact-SQL code much easier than in previous releases. SQL Server 2000 enhanced the internationalization features of SQL Server 7.0 by adding such things as column-level collations, so the tools available to you as a developer have simply gotten better over time.

Tuesday, June 08, 2004

(.NET) Web Forms State Management

For those who are interested in managing the state of variables across different forms, this is an interesting article from MSDN...
Web Forms State Management

I have not yet tried this out, but came across it today. Will post more details later... after I try it.

Saturday, June 05, 2004

(eLearning) SCORM Standards - eLearning Industry

This is an extract taken from the SCORM section of ADLNET.ORG


SCORM Overview
The Sharable Content Object Reference Model (SCORM) aims to foster creation of reusable learning content as "instructional objects" within a common technical framework for computer and Web-based learning. SCORM describes that technical framework by providing a harmonized set of guidelines, specifications and standards. Borrowing from the work of other specification and standards bodies, ADL developed a model for creating and deploying e-Learning.

SCORM helps define the technical foundations of a Web-based learning environment. At its simplest, it is a model that references a set of interrelated technical specifications and guidelines designed to meet high-level requirements for learning content and systems. SCORM describes a "Content Aggregation Model (CAM)" and "Run-Time Environment (RTE)" for learning objects to support adaptive presentation of content based on criteria such as learner objectives, preferences and performance.

SCORM targets the Web as a primary medium for delivering instruction. It does so under the assumption that anything that can be delivered by the Web can be easily used in other instructional settings that make fewer demands on accessibility and network communications. This strategy eliminates much of the development work once needed to adapt to the latest technology platform because the Web itself is becoming a universal delivery medium. By building upon existing Web standards and infrastructures, SCORM frees developers to focus on effective learning strategies.

The development of SCORM continues, even as the main medium it targets, the Web, continues to evolve and change. SCORM currently provides an Application Programming Interface (API) for communicating information about a learner’s interaction with content objects, a defined data model for representing this information, a content packaging specification that enables interoperability of learning content, a standard set of meta-data elements that can be used to describe learning content and a set of standard sequencing rules which can be applied to the organization of the learning content. While the technical standards used by the Web turn out to work equally well locally, regionally and globally, when it comes to the standardization of e-learning itself, the task of SCORM is continuing to evolve.
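The run-time communication mentioned above can be sketched as a call sequence against the API object an LMS exposes to content. The function and data-model names below follow the SCORM 1.2 run-time API; the `FakeLMS` class is purely a stand-in of mine, not real LMS code:

```python
# A toy LMS illustrating the SCORM 1.2 run-time call sequence: content
# initializes a session, reads and writes cmi data-model elements,
# commits, and finishes.

class FakeLMS:
    """A stand-in for the API object a real LMS injects into content."""
    def __init__(self):
        self.data = {}
    def LMSInitialize(self, arg=""):
        return "true"
    def LMSSetValue(self, element, value):
        self.data[element] = value
        return "true"
    def LMSGetValue(self, element):
        return self.data.get(element, "")
    def LMSCommit(self, arg=""):
        return "true"
    def LMSFinish(self, arg=""):
        return "true"

api = FakeLMS()
api.LMSInitialize("")
api.LMSSetValue("cmi.core.lesson_status", "completed")
api.LMSCommit("")
print(api.LMSGetValue("cmi.core.lesson_status"))  # completed
api.LMSFinish("")
```

Because every conformant LMS exposes the same calls and data model, the same content object runs unchanged across systems, which is the interoperability SCORM is after.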

First released in January 2000, the SCORM continues to update and expand the scope of the specifications through cooperation with industry, government and academic participants.

SCORM Today
SCORM is a collection of specifications and standards that can be viewed as separate "books" gathered together into a growing library. Nearly all of the specifications and guidelines are taken from other organizations. These technical "books" are presently grouped under three main topics: "Content Aggregation Model (CAM)," "Run-Time Environment (RTE)" and "Sequencing and Navigation (SN) (introduced in SCORM 2004)." Additional specifications are anticipated in future SCORM releases.

While the various SCORM books, focusing as they do on specific aspects of SCORM, are intended to stand alone, there are areas of overlap or mutual coverage. For instance, while the RTE book focuses primarily on communication between content and LMSs, it frequently refers to the types of content objects conducting that communication: Sharable Content Objects (SCOs). Their definition and the complete treatment of SCOs are found in the CAM book. Similarly, the Sequencing and Navigation book covers the details of SCORM sequencing and navigation processes to include detailed coverage of how an LMS evaluates navigation requests and related activities. The run-time navigation behavior maintains the possibility of reusing learning resources within multiple and different aggregation contexts. Thus by keeping the rules and navigation separate from and outside of content objects, the content may be reused in new and different ways to support many different instructional strategies.

The above covers the definition as well as the importance of SCORM standards in eLearning. With the emerging market for eLearning (LMS, LCMS, CMS, ...), the importance and usage of SCORM are growing.
I have been working on SCORM for quite some time, and from my experience I can say that, though learning it or working with it looks easy, there is a lot to SCORM and much can be done using it as a standard. In fact, many of the firms that develop content or depend on eLearning follow SCORM standards.

I am planning to write some reviews on existing tools that help in implementing SCORM standards in an easy way.


Hi.. My First Blog...

Hi, I joined this blogging community recently and thought I would say hi before I start posting on anything else.

A software engineer by profession, with many activities that are not related to my profession, I am planning to post on different topics here. Gathering info and storing it is my passion, and now I want to share it with others.

I hope to have an interactive communication with fellow bloggers out there.