Items by Scott M. Fulton, III

BetaNews.Com



  • Link to 'BetaNews.Com/2009/06/03/Microsoft__extends__Windows__What_does_that_mean_'

    Microsoft 'extends' Windows: What does that mean?

    Published: June 3, 2009, 8:17pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    This morning in What's Now | What's Next, we reported on the early word from a keynote address to the Computex trade show in Taiwan, from Microsoft Corporate Vice President Steven Guggenheimer. What might have been big news there was already leaked in advance: Windows 7 will be available to the public October 22. The #2 story was supposed to have been the company casting its net wider, making Windows available on a broader range of devices.

    Yet in Taiwan, where IT device production is shifting away from PCs and toward smaller, more customized devices, the question is just how broad that new range will be. The industry there (which locals refer to as "ICT" for "information and communications technology") has drawn a borderline around a concept called smartbooks -- devices whose blueprints can be assembled using pre-existing intellectual property that's licensed to vendors, typically using ARM processors. Meanwhile, Microsoft has drawn some borderlines of its own -- again -- by way of announcing that Windows may be addressing new market segments in the near future, extending its reach to new platforms. But now, there's dispute and confusion over whether the ICT industry's boundaries and Microsoft's have any overlap.

    This morning, Microsoft is saying a lot without actually answering that particular question directly. In a statement to Betanews, a Microsoft spokesperson said Guggenheimer's keynote centered around a concept called consumer Internet devices (CID), defined as devices that are not necessarily portable, not necessarily PCs, but which are connected to the Internet and whose functionality depends on it. Hand-held GPS units, PMPs, and set-top boxes all fall into this category. The spokesperson said that the Embedded Division's general manager, Kevin Dallas, joined Guggenheimer to discuss how Windows would "drive innovation" in this category.

    Meanwhile, at the very same time, Microsoft's corporate press office issued a statement saying Guggenheimer's keynote centered around a concept called ultra-low-cost PCs (ULCPC), a class of non-portable device that's connected to the Internet, and which may include such things as networked electronic picture frames and stationary e-mail and IM receivers. That statement started by referring to this segment as nettops, before mentioning that the more common name is ULCPC (if true, perhaps the first case in history where a catchy title is tossed aside for a five-letter abbreviation without a vowel).

    As it very likely turns out, Guggenheimer took time to reflect on both categories, and different departments of the company took home their slice of the pie that best suited their respective flavors. But are smartbooks anywhere in the vicinity of these categories that Microsoft will support? According to Reuters, in an exclusive interview, Guggenheimer directly answered no, saying, "For people who want a PC, albeit a different chipset, we don't think those will work very well." In other words, Guggenheimer repeated the Microsoft message that when folks want to do PC-style work, they prefer a PC-style computer, going on to suggest that with any other kind of platform, users wouldn't be guaranteed the use of their favorite software or their plug-in devices such as printers.

    So the discussion of "no Windows for smartbooks," while dominating much of the online traffic this morning, may not take into account a very important point: Windows Embedded CE is already one of the two dominant operating systems preferred for use on ARM processor-based components by ARM itself. When ARM executives brought up the topic some months ago of whether Microsoft should extend its support to ARM devices, they were talking about Windows 7 -- whether Microsoft should make a version of its desktop class PC operating system for ARM-based smartbooks, in light of engineers who successfully made Windows XP work there. The "no" answer from Guggenheimer appears to be an answer to that question.

    In the end, however, Windows 7 may not be best suited for such devices anyway, for reasons that Guggenheimer alluded to and which may be even more numerous. Windows Embedded CE is designed to be deployable on a small device such as an ARM-based component, in such a way that it receives only as many features and functions as is necessary to run the thing. Windows 7, meanwhile, could be transferred by way of hard drive from one PC to a completely different PC, and despite the product activation issues that would likely ensue, chances are that it would run. The desktop PC operating system contains so much more overhead than an ARM device would ever put to use.

    But by that same token, you could substitute "networked picture frame" in place of "ARM device" in that last sentence. You're not going to want your printer or Outlook or SharePoint Server running from, say, your refrigerator. Why didn't Guggenheimer's logic apply the same way to "nettop" devices as it did to "smartbook" devices?

    Does that mean there will be "no Windows for netbooks," as some press sources have extrapolated Guggenheimer's statement(s) to mean? Absolutely not. As the company made clear to Betanews last February, while there won't be something entitled "Windows 7 for Netbooks" or even "Windows 7 for ULCPCs," OEMs that produce what they call "netbooks" will be eligible to pre-install Windows 7 Starter Edition. That's the version whose three-application maximum was axed last week; certainly there wouldn't be an unlimited number of apps running on Windows 7 on netbooks if Windows 7 couldn't run on netbooks.

    In the end, you do have to wonder: If Microsoft is willing to extend its marketing umbrella for Windows to encompass one class of devices that's actually serviced by Windows Embedded, why not extend it to encompass another very similar class that's also serviced by Windows Embedded, and that probably runs on the same processors? Once more, a Microsoft keynote address succeeds in replacing one set of questions with another.

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/06/03/Opera_10_beta_sports_a_new_look__23%_boosted_performance'

    Opera 10 beta sports a new look, 23% boosted performance

    Published: June 3, 2009, 9:01am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Opera 10 for Windows Build 1551 Beta 1 from Fileforum now.

    Differences between the default skins in Opera 9.64 and the new Opera 10 beta.

    The developers at Opera Software have been publicly working with version 2.2 of the Presto rendering engine for the company's premier Web browser since last December. Their goal has been to implement Web fonts for Scalable Vector Graphics without sacrificing performance or other standards support. Conceivably, this could allow sites to deploy both TrueType and SVG fonts in user-scalable sizes that fit the current window, as this recent Opera test pattern demonstrates. (Right now, Firefox 3.5 Beta 4 supports some scalable TrueType, but not to the degree that Opera does.)

    It's an impressive renderer, and in early Betanews tests, it appears to be the engine of a snappy and stable browser. But Opera 10 may not have an opportunity to be released to the public before Mozilla cuts the ribbon on Firefox 3.5. When that day arrives, Opera 10 could flip-flop from being 28% ahead of Mozilla in overall performance to being 57% behind it, according to performance tests on a physical platform. More about those numbers later. In the meantime, Opera users will need to be satisfied with what is, without a doubt, a better Opera browser. The first Opera 10 Beta is now officially released, and right away, testers and Opera fans will notice a big difference: a new default skin, developed by new Senior Designer Jon Hicks.

    The new look lacks the distinctive glassy edginess that characterized Opera 9.5 and later editions, replacing it with a cooler, softer shade of grey that, while less original, may be easier for users to discern…maybe. The emblems on some buttons have changed, though their meanings are the same: For instance, the wrench-and-screwdriver icon from 9.5 that brought up the Panels bar along the left side (one of Opera's better contributions) is replaced with an icon that looks like a panel opening to the right. And the trash can icon that represented closed tabs the user may restore is replaced with a recycle-like arrow that rotates counter-clockwise, although the "Empty Trash" command is still available by that name from that icon.

    Tabs in the new default skin are rounded, and arguably look more like tabs. The big surprise -- and certainly Opera's latest submission to the "Now, Why Didn't We Think of That?" department -- is the vertically resizable tab bar. Dragging the grabber down reveals thumbnails of the latest snapshots of the visible open tabs.

    Thumbnails reveal themselves when you pull down the tab bar in the new Opera 10 beta.

    This is one of those "Aha!" features that could draw new users to Opera, at least until Firefox or another browser appropriates it. You don't have to reveal the entire thumbnail, so you don't have to consume too much space. Granted, as I've said before in our reviews of Mozilla's mobile browser experiment Fennec, thumbnails aren't always representative of their content. But they do represent multiple open pages more effectively than just their "favicon" icons (which disappear when you drag the tab bar down).

    Now, such a feature will probably preclude any type of add-on that enables tabs to appear in multiple rows. But that might be a fair tradeoff, and in the same vein as Microsoft's changes to the taskbar in Windows 7, Opera's new tab bar could start a fashion trend.

    A fully customized Opera 10 speed dial page.

    All of a sudden, Firefox is actually behind the ball with respect to an issue it was supposed to have owned: the contents of a newly created tab. While Firefox 3.5 Beta 4 introduces the public for the first time to "favorites" concepts Mozilla Labs began experimenting with last year, Opera is adding some intriguing new features to the Speed Dial feature it introduced in version 9.2. Now you can alter the size of the Speed Dial grid to as much as 5 x 5, and pull up a custom background from your hard drive.

    Opera Software claims a 40% speed boost

    Download Opera 10 for Linux Build 1551 Beta 1 from Fileforum now.

    Download Opera 10 for Windows Build 1551 Beta 1 from Fileforum now.

    In its release announcement this morning, Opera Software claims that its first beta of version 10 is "40% faster than Opera 9.6." In Betanews tests of basic JavaScript and CSS rendering, we estimate the Opera 10 beta is a 22.6% better performer overall in Windows 7 than Opera 9.64, and 2% better than the Opera 10 Alpha public preview.

    Relative performance of Windows-based Web browsers, June 2, 2009.

    But there have been a lot of developments in the Web browser field in the last year, and although Opera 10's renderer is crisp and splendid in the early going, it does not appear that Opera's JavaScript engine will be enough to keep up with Google, Apple, and Mozilla in the new race for online efficiency. The latest Betanews performance tests, which include updated figures for daily development builds for Firefox 3.5 and 3.6, reveal a picture of a Google Chrome 3 browser that is more than twice as fast as Opera, and an Apple Safari 4 browser that (once the bugs are worked out with Windows 7) may be faster still.

    A word about our Windows Web browser test suite

    Tests conducted Tuesday afternoon give the Opera 10 Beta a 5.58 index score in Windows 7 RC -- meaning, 558% the performance of Microsoft Internet Explorer 7 in Windows Vista, a slow browser on a slow OS. On Vista, the Opera 10 Beta gets a 5.08. In speed tests alone (excluding standards compliance), the new beta shows almost exactly four times the speed of IE7 in Vista, and 185% the speed of IE8 in Vista. That makes the new beta a more capable browser than Firefox 3.0.10, Mozilla's latest stable release…but not for long.
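
    Betanews hasn't published the precise formula behind these index scores, but the arithmetic they describe -- every browser expressed as a multiple of the IE7-in-Vista baseline -- can be sketched in a few lines of Python. Everything in the snippet below (the tests, the timings, the equal weighting) is an illustrative assumption, not our actual test data:

        # An illustrative sketch of a performance index normalized to a
        # baseline of IE7-on-Vista = 1.00. The tests, timings (seconds,
        # lower is faster), and equal weighting are hypothetical -- this
        # is not Betanews' published formula or data.
        BASELINE = {"sunspider": 40.0, "css_render": 10.0}  # IE7/Vista times

        def index_score(timings, baseline=BASELINE):
            """Average of per-test speed ratios against the baseline."""
            ratios = [baseline[test] / t for test, t in timings.items()]
            return sum(ratios) / len(ratios)

        # Hypothetical Opera 10 Beta timings on Windows 7 RC:
        print(round(index_score({"sunspider": 7.0, "css_render": 1.9}), 2))
        # -> 5.49, i.e., roughly 549% of the baseline browser's performance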

    The latest build of what could become the release candidate for Firefox 3.5 scored an 8.77 in Windows 7 RC, and a 7.44 in Vista. That represents a speed boost of 24% from the new OS, versus the overall average of 12% and the 17% gain for the Opera 10 Beta. In a bit of a resuscitation for the "Minefield" development track, the private Firefox 3.6 Alpha scored a 9.10 in Windows 7, and a 7.54 in Vista.

    So Opera Software may want to consider a short beta cycle for version 10, and to keep the oven warm for a new version that will address what may inevitably be characterized as a real speed problem.

    Download Opera 10 for Linux Build 1551 Beta 1 from Fileforum now.

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/06/02/Windows_7_to_be_released_October_22'

    Windows 7 to be released October 22

    Published: June 2, 2009, 8:33pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Breaking News

    The news comes in advance of comments planned for the Computex conference in Taiwan early tomorrow morning by Microsoft Corporate Vice President for OEMs Steve Guggenheimer. There, he is scheduled to officially deliver the news that worldwide general availability of Windows 7 will begin on Thursday, October 22.

    Microsoft's spokesperson gave Betanews a heads-up to expect comments from Guggenheimer concerning a program being called Windows Upgrade Option. That's precisely the title of an FAQ that was leaked to the public last month by the technology blog TechARP. That FAQ, which appeared to contain language directly from Microsoft, spoke about a low- or no-cost upgrade option for recent purchasers of consumer SKUs of Windows Vista.

    If we'll learn tomorrow what kind of discounts are being offered, it's very likely we'll also hear the final suggested retail pricing for all Windows 7 SKUs.

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/06/02/Bing_vs._Google_face_off__round_2'

    Bing vs. Google face-off, round 2

    Published: June 2, 2009, 5:35pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The way we left things yesterday, we gave Microsoft's newly revamped Bing search engine some moderately tough, everyday search tests, and gave Google the same treatment. After three heats, the score thus far is Bing 2, Google 1, with Bing performing quite admirably in the computer parts shopping department.

    Search engines are fairly good for finding something you know you're searching for. In the real world, folks don't often know what or who it is they're searching for -- which is why they're searching in the first place. So suppose someone sends you out on the Internet to find...

    That guy from that old movie

    Actor Rod Taylor

    You know the guy I'm talking about. What's-his-name. The guy with the big chin, from that movie you like that had the girl in it. Kind of rugged. Looks a little like Scott Bakula. Not William Holden.

    The most obvious deficiency search engines have today is that they gather no collective context about images, and people don't always remember names. With Google, Bing, and all the other major search engines, the only context their indexes can gather about images comes from the text in the immediate vicinity of the Web pages where those images reside. Now, hopefully those images have captions, and those captions include the basic information about who's in the photograph. That's helpful if you're specifically looking for a picture of, say, William Holden.

    But what about a fellow whose name not only escapes you, but about whom the only information you have comes from someone else who's also trying to remember it? All you know is what they're giving you -- that guy from the 1960s who was the lead actor in something-or-other, I think it's science fiction.

    For this test, we came up with a real-world-like query that may not be the most efficient, but one which a regular user is likely to enter: actor 1960s "science fiction" movie lead. If you search Bing's and Google's Images based on just something this general, you'll never find the guy, and you'll be there forever. Google will show you pictures featuring Burt Lancaster (did he ever do sci-fi?), Edward James Olmos (not 1960s), Keanu Reeves, some guy named Shatner, Sidney Poitier, Steven Spielberg, Harrison Ford, and "Susan" from Monsters vs. Aliens. And that's just among the entries that make sense; you'll also find this black-and-white photo of a 1960s mobile TV signal detector -- a giant radar dish that the British Government once used to detect unlicensed receivers of public TV signals. Interesting, but not even close.

    Meanwhile, Bing pulled up some movie posters featuring Jayne Mansfield (nice, but not close either), Clint Eastwood in the French edition of For a Few Dollars More, Gary Dourdan from CSI, the Grinch, and Godzilla. In a case such as this, you'd have to press your source for one more bit of information.

    So here's where we threw both Google and Bing a bone: Suppose your source tells you, "I think it was some bird movie." Now, a fan of great films of the 1960s wouldn't have to type anything more at this point -- she'd say, "Oh, you mean The Birds, the Hitchcock film with Tippi Hedren? You must mean Rod Taylor." Assuming you're not that lucky, or your memory for names is more like...well, mine, we'll throw in the term bird into our search query.

    Give Bing a clue as to what image you're looking for, and it will run with it...in a whole lot of directions.

    Bird is the term that should be the dead giveaway; it's the difference between a query that could be a one-in-a-million shot and one that should have a respectable chance of giving you a clue. It's with this addition that Bing pulled up a fan-made poster from the new Star Trek movie, Charles de Gaulle, Steve Martin from Dead Men Don't Wear Plaid, Johnny Depp, and whatever unfortunate pairing of persons appears in the fourth photo on the second row. If you scroll down this page, you'll be just as baffled with the likes of Edgar Allan Poe, a former friend of former New York governor Eliot Spitzer, Katie Couric, Shirley Temple, and a Coca-Cola bottle.

    Try the same search in Google Images, and you'll get ever-so-slightly closer results.

    Add the giveaway term to Google's query, and you'll see a few closer hits and some further-off misses. Doggone if that's not a clip from The Birds, first photo on the second row, albeit not with Rod Taylor. You'll also find a picture of a '60s sci-fi actor named Bruce Connor, whose obituary happens to share the same Web page as that of actress Suzanne Pleshette, who also starred in The Birds (and who remains greatly missed). And there's also the much-missed Ricardo Montalban ("Kh-a-a-a-n!"), the much-envied George Clooney, and a picture of a small duck probably taken sometime in the 1960s.

    If Tippi Hedren's face didn't clue you in, you probably wouldn't know to search further along that same thread to locate Rod Taylor. So it's at this point where we toss the query back to the textual search engine for any kind of help whatsoever. And it's here where you come to realize Google's true strength. Now, we've already determined that textual context used to sort photos is worthless on both counts. But both search engines should be capable of gleaning a collective context from a six-element query, rather than just throwing pattern matches onto the screen -- photos of ducks and Godzilla -- to see what sticks. Item #1 in Google's search results is an AbsoluteAstronomy.com article about -- bingo! -- Rod Taylor.

    Meanwhile, with the very same query on Bing, Rod Taylor appeared nowhere within the first 150 results obtained. Let's face it, only Tom's Hardware readers are the sort who'll follow through to page 15 of anything online. Thus in this heat, it's one more point -- albeit a very small one -- for Google, bringing our score thus far to Bing 2, Google 2.


    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/05/29/On_second_thought__Microsoft_lifts_Windows_7_s_three_app_limit_for_netbooks'

    On second thought, Microsoft lifts Windows 7's three-app limit for netbooks

    Published: May 29, 2009, 11:41pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If it's a counter that's arbitrarily determining how many applications your limited edition of Windows 7 should be allowed to run, how many precious system resources does that counter consume? And couldn't that memory and space be put to better use, say, running an app? Where and how should netbook manufacturers tell customers they can only run three Windows apps at a time? These are the kinds of questions Microsoft's engineers have been fielding with regard to a limitation in the company's forthcoming Windows 7 Starter Edition, a SKU of the operating system it wants netbook manufacturers to pre-install.

    In an indication that all this listening to consumers' wishes may be giving Microsoft's people a headache, the company's Win7 evangelist Brandon LeBlanc announced this afternoon the addition to Starter Edition of a kind of feature, if not in fact the subtraction of a feature that nobody wanted: The three-app counter will be gone.

    Hiding his message in an announcement touting "worldwide availability," LeBlanc wrote, "We are...going to enable Windows 7 Starter customers the ability to run as many applications simultaneously as they would like, instead of being constricted to the 3 application limit that the previous Starter editions included. We believe these changes will make Windows 7 Starter an even more attractive option for customers who want a small notebook PC for very basic tasks, like browsing the Web, checking e-mail, and personal productivity."

    Even that's just three items, and users could certainly add more to that list, breaking the old barrier. To make sure users hold it down a bit after being thrown a big bone, LeBlanc added that the company will not relent in its decision not to add the Aero front-end, sound and graphics customization, Media Player streaming, or XP Mode to Starter Edition. Many netbooks will run on processors like Intel's Atom, whose integrated graphics would not be capable of rendering Aero anyway, and which does not support the hardware virtualization extensions necessary to run XP Mode.

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/05/29/Top_10_Windows_7_Features__2__Device_Stage'

    Top 10 Windows 7 Features #2: Device Stage

    Published: May 29, 2009, 11:06pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    If the strange feeling that Vista was less secure than XP was topmost on critics' gripe lists over the last three years -- regardless of the facts that contradict that feeling -- running a close second was the feeling that very little, if anything, outside of the PC worked with Vista when you plugged it in.

    Here, the facts aren't all there to compensate for the feeling. Even in recent months, Palm Centro users complained about the lack of a Vista driver for connecting Centro to the PC outside of a very slow Bluetooth; Minolta scanner users were advised to hack their own .INF files with Notepad in order to get Vista to recognize their brands; and Canon digital camera owners are being told by that company's tech support staff that Microsoft was supposed to make the Vista drivers for their cameras, but didn't.

    What's going on here? Certainly, Microsoft shouldn't be responsible for producing drivers for every little thing that could fit the other side of a USB cable. On the other hand, if device manufacturers haven't reached that same conclusion, whose fault was it that they weren't steered in the right direction back while Vista was in beta?

    With Windows 7, Microsoft is making a concerted and earnest effort to avoid enabling a repeat performance. Its gamble is that Windows 7 can create an avenue, for the first time, for manufacturers of printers, cameras, smartphones, music players, and other devices to build completely customized extensions of the desktop. The inspiration was an idea presented internally by Microsoft engineers in prototype form a few years ago: You plug in a smartphone, and it appears on your desktop.

    That idea evolved somewhat into what Microsoft calls the Device Stage, with the notion that once you plug in a recognized device, something bigger and better than just an icon represents it on your Windows 7 desktop. You plug in your phone, and boom, there it is -- not the brand name of the manufacturer, but a real picture of the phone. (3D renderings of the device were considered as an option, I'm told.) From there, the device manufacturer is free to present a completely customized operations and management interface specifically geared around the device. Think of it like the device's "home page" -- it opens up, and you see clearly stated or rendered links to the things the device can do.

    The first public demonstration of Windows 7 Device Stage at PDC 2008.

    Since the Windows 7 taskbar itself has changed since that initial prototype was presented, the concept has evolved a bit, and in a sense simplified. Now when you plug in a recognized device, an icon for it does appear in the new taskbar, and it can be "pinned" there just like an application. Hovering over it brings up a frame that can serve as a status report for the device -- for instance, registering the battery level for a phone or a camera, or showing how many songs are loaded in an MP3 player and how many minutes or megabytes remain.

    Up to now, providing such a customized view of a plugged-in device's capabilities required the manufacturer to be willing to write the software pretty much from scratch, and then have that software behave nicely with its own device driver. Microsoft still needs its supporting device manufacturers to make some effort, but this time, it's using Web-oriented technologies to at least try to ease their burden.

    A simulation of Windows 7's Device Stage feature in action.

    What Microsoft is hoping manufacturers will embrace is a concept called the Device Stage metadata package. It's an XML-based file that represents all the visual elements that the user sees when he plugs in a device (or when Bluetooth captures it), and all the software components related to the functions that device performs, in an explicit database. Conceivably, that software can be installed automatically through the Internet as the device is plugged in (or comes in range), since the metadata package will contain instructions for how Win7 retrieves it, and how it should be registered locally.
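
    To make the idea concrete, here is a hypothetical sketch of what such a package might look like, and how code on the Windows side could consume it. The element names below are invented for illustration; Microsoft's actual Device Stage schema is not reproduced here:

        import xml.etree.ElementTree as ET

        # A hypothetical, simplified Device Stage-style metadata package;
        # element and attribute names are invented for illustration and
        # are not Microsoft's actual schema.
        PACKAGE = """
        <device id="usb-vid-22b8">
          <model>Motorola RAZR</model>
          <image href="http://example.com/razr.png"/>
          <task name="Sync contacts" action="sync://contacts"/>
          <task name="Browse files" action="shell://storage"/>
        </device>
        """

        root = ET.fromstring(PACKAGE)
        print(root.find("model").text)      # the device's display name
        for task in root.findall("task"):   # the device's "home page" links
            print(task.get("name"), "->", task.get("action"))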

    The metadata also links to pictures and other assets supplied by the manufacturer, so the user doesn't see the driver as something that "belongs" to Microsoft -- which may end up helping Microsoft just as well as the manufacturer.

    A custom function page designed for a Motorola phone, in Windows 7 Device Stage.

    The big goal there is to eliminate the need for the user to run a CD-ROM driver setup routine before Windows ever comes within a hundred feet of the device -- today, users who have opened boxes to find "Do Not Do Anything Else Until You Install This Software" warnings are literally scared to trust Vista afterward. But there's a bonus Microsoft is fully aware of, though it doesn't talk much about it lest it cast a jinx: The fewer CD-ROM setup routines a user has to run, the fewer opportunities there are for installing bloatware and useless software that clogs up the operating system, befuddles users with unwanted advertising, and makes users blame Windows for their frustration. Device Stage metadata is only for providing device functionality, nothing more.

    Rendering the Control Panel obsolete

    One offshoot of the Device Stage that will make a positive impact on Windows 7 is the new "congregation area," if you will, for stuff that's plugged into the PC. It's the new Devices and Printers window, and it provides an alternate view of what Bluetooth and other device engineers are calling the "personal area network."

    The new Devices and Printers window in Windows 7.

    This window is effectively a list of everything that's attached to the operating system -- if Windows can see it, it's here. Everything appears once and once only, so a multi-function printer doesn't show up separately as a fax machine and a telephone; and a personal media player doesn't show up as an MP3 player, a video player, a game machine, and a camera.

    Here also, in blatant contrast to how Windows has always worked, the PC itself is a device. Right-clicking on it brings up a popup menu of its typical settings -- sound, mouse sensitivity, current language, keyboard options, ejecting an attached device. This doesn't replace the Control Panel, but a newcomer to Windows may very well find this way of setting preferences easier to comprehend.

    Using the metadata file, Microsoft was able to retrieve a picture of…well, a Microsoft-brand device from the Internet, and register that as the keyboard attached to our Windows 7 test system. But with very few exceptions from partners such as Canon, there are no other working examples of "staged" devices for Win7 RC users to test. For the time being, Win7 is capable of providing alternative renderings; but in the screenshot above, that thing that looks like an external hard drive bears no resemblance at all to my BlackBerry 8830.

    So you can see the problem on the horizon. Device Stage could very well become an influential element for Windows 7 for either of two reasons: 1) It could dramatically improve the way users make sense of the things they can do with their devices once they're plugged in; or 2) it could drive home even further the reluctance of certain other manufacturers to cooperate, to even try to do their part to make their devices interoperable with the operating system that three out of five of their customers are likely to be using in 2011. How Windows 7 is perceived by the general public -- even by folks who use something else, like Mac OS -- may be determined by how well Device Stage attains its principal goal.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.


    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/05/29/How_sparse_is_US_rural_broadband__FCC_admits_it_doesn_t_know'

    How sparse is US rural broadband? FCC admits it doesn't know

    Published: May 29, 2009, 6:48pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    With a national plan for broadband Internet deployment due in just nine months, a report published Wednesday by FCC Commissioner Michael J. Copps -- still serving as Acting Chairman until the confirmation of Julius Genachowski gets back on track -- admits that the data on just how sparse broadband service is in the nation's rural areas has yet to be compiled. Less than a year before the deadline on action, the government literally doesn't know.

    "Our efforts to bring robust and affordable broadband to rural America begin with a simple question: What is the current state of broadband in rural America?" Commissioner Copps writes (PDF available here). "We would like to answer this question definitively, and detail where broadband facilities are deployed, their speeds, and the number of broadband subscribers throughout rural America. Regrettably, we cannot. The Commission and other federal agencies simply have not collected the comprehensive and reliable data needed to answer this question. As the Commission has indicated, more needs to be done to obtain an accurate picture of broadband deployment and usage in America, including its rural areas."

    A single footnote reference following this paragraph tells the rest of the story: The FCC issued a Notice of Proposed Rulemaking back in April 2007, which would have set the ball rolling for the data collection process to begin. Immediately thereafter, as per protocol, the US Government Accountability Office weighed in on what they perceived as the arcane methods of data collection and processing up to that time. For example, the GAO noted that certain areas on national maps were considered "covered" by broadband if so much as one subscriber within that area received service. And with ZIP codes determining coverage areas, the more rural an area gets, the broader the range of its ZIP code.
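
    A toy example makes the GAO's objection concrete: under the one-subscriber rule, ZIP-level "coverage" can read 100% even while most rural households can't actually get service. Every number below is invented for illustration:

        # A toy illustration of the GAO's objection to ZIP-code counting:
        # one subscriber makes an entire ZIP "covered." All figures invented.
        zips = {
            # ZIP code: (households, households that can actually get service)
            "10001": (20_000, 19_000),  # dense urban ZIP
            "99999": (500, 1),          # rural ZIP with a single subscriber
        }

        covered = sum(1 for _, served in zips.values() if served >= 1)
        print(f"ZIP-level 'coverage': {covered / len(zips):.0%}")  # -> 100%

        households = sum(h for h, _ in zips.values())
        reachable = sum(s for _, s in zips.values())
        print(f"household-level availability: {reachable / households:.0%}")  # -> 93%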

    Advocacy groups representing communities interested in building up the nation's broadband infrastructure cited the GAO's warning cry as vindication of what they'd been arguing since the turn of the decade. In response to that 2007 rulemaking proposal, the National Association of Telecommunications Officers and Advisors and the US Conference of Mayors jointly issued a response to the FCC, citing the GAO report and emphasizing that the data collection process needed to begin in earnest.

    Relative US coverage of broadband service providers for 2006

    This 2006 "coverage" map color-codes the ZIP code regions served by as many as seven broadband providers. However, the number of actual customers receiving service in many of these zones could be as few as one.

    "The Government Accounting Office openly criticized the current collection approach taken by the Commission, emphasizing in its report that 'the data may not provide a highly accurate depiction of local deployment of broadband infrastructures for residential service,'" their report to the FCC read (PDF available here). "The low standard by which a ZIP code is considered served as long as there is a single subscriber, along with a lack of detail with some statistics, raises doubts about the validity of the Commission's data -- data that claims broadband has reached 99% of the American population in 99% of ZIP codes...The need for a national broadband policy is clear, and local government national associations all support the development of such a policy. But before further progress can be made in that direction, one thing is clear: the need for more precise national broadband data."

    The GAO report also sparked a response from Commissioner Copps, who at the time was not acting as FCC Chairman: "If the Commission had prudently invested in better broadband data-gathering a decade ago, I believe we'd all be better off -- not just the government, but more importantly, consumers and industry. We'd have a better handle on how to fix the problem because we'd have a better understanding of the problem. We would already have granular data, reported by carriers, on the range of broadband speeds and prices that consumers in urban, suburban, exurban, rural and tribal areas currently face. We would know which factors -- like age, gender, education, race, income, disability status, and so forth -- most affect consumer broadband decisions. We would understand how various markets respond to numerous variables. We could already be using our section 706 reports to inform Congress and the country of the realities of the broadband world as the basis for charting, finally, a strategy for the ubiquitous penetration of truly competitive high-speed broadband. I don't believe we'd be 21st in the world had we gone down that road. But that was the road not taken."

    Now, over two years later, the Commission is no further along at gathering this data than it was in April 2007. But ironically, the man having to answer for that failure is the same man who called attention to it back then. So in an attempt to substitute what he called "a human face" for the lack of a clear nationwide picture, Comm. Copps cited a report from the Minority Media and Telecommunications Council (MMTC), concerning the state of broadband deployment in just one community: Weirwood, Virginia, population 1,174 (2000 census).

    As Copps relates in his Wednesday report, "Weirwood is an isolated rural community on Virginia's Eastern Shore, on the site of a former cotton plantation. Weirwood is only a mile and a half from U.S. Route 13, along which lies a broadband Internet backbone. The residents of Weirwood, however -- mostly African-American descendents of former slaves -- lack access to broadband. MMTC states that Weirwood has 'absolutely no ability to raise internally' the funds needed to build a broadband node to the community from the existing backbone line. Pending acquisition of thorough, reliable, and disaggregated data, we glimpse through Weirwood the state of broadband deployment in impoverished rural areas.

    "Even without detailed maps of broadband service availability, we know that Weirwood is not unique," Copps continues. "Whether we are discussing a historically African-American community like Weirwood, Tribal lands that even now lack access to voice telephone service, or individuals with disabilities whose access to broadband is essential, overall, there needs to be an active federal governmental role if all Americans are to have access to robust and affordable broadband services. The challenge we face is determining ways to adjust our efforts to ensure that the residents in places like Weirwood, or anywhere in rural America, are able to take advantage of the opportunities that come with broadband."

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/05/29/Google_s_move_to_introduce_a_Wave_of_synchronicity'

    Google's move to introduce a Wave of synchronicity

    Published: May 29, 2009, 12:40am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It's not unusual to see something emerging from Google's laboratories that folks in the general press fail to understand, and the company's marketing is partly to blame there. The public introduction during this morning's I/O Developers' conference of a Web programming construct called Google Wave generated headlines ranging in scope from a new competitor for Microsoft SharePoint, to a next-generation social network, to a series of browser extensions for Chrome to rival the Mozilla Jetpack project, and finally to the company's evil plan to conquer and corrupt HTML 5.

    Excluding the latter, it could very well be all of these items. Essentially, Wave is an architecture, and not really a very new one. It's an old solution to a very old problem: that of synchronicity in distributed applications. As database architects know better than anyone, the problem with maintaining a distributed database is that multiple users may make changes that conflict with each other, leading to disparity and multiple versions. Currently, transactional modeling solves that problem, but a simpler and more direct approach, mathematically speaking, would be to translate every operation a user requests -- every command from client to server -- into a figurative mathematical language whose terms take into account the changes being ordered simultaneously by everyone else.

    It's a simple concept on paper. Accomplishing it has been all but impossible up to now, mainly because the speed and connectivity needed to deploy a transformational database matrix on a massive scale have not existed. But "Google" has come to mean massive scale, and now it's giving the concept a try.

    Operational transformation (OT) -- a way of rewriting every transaction to take into account all the others, prior to executing it -- could lead to some very sophisticated, connective applications. One way that it works is by upsetting the typical hierarchy of database architecture. Whereas typically you might think of a database as a thing in the core to which changes happen, OT reverses the concept by generating a kind of change model that bears a striking resemblance to a Feynman state-change diagram in quantum physics. Here, the database or "document" that ends up being the beneficiary of change is used to represent the change itself, or what the architecture calls a wavelet. It then passes through the state-change diagram the way a wind-blown piece of paper slips through a barbed-wire fence.

    "Under the basic theory of OT, a client can send operations sequentially to the server as quickly as it can. The server can do the same. This means the client and server can traverse through the state space via different OT paths to the same convergent state depending on when they receive the other parties operations," reads a Google white paper on OT. "When you have multiple clients connected to the server, every client and server pair have their own state space. One short coming of this is the server needs to carry a state space for every connected client which can be memory-intensive. In addition, this complicates the server algorithm by requiring it to convert clients' operations between state spaces."

    A demonstration of a connectivity application using Google Wave mounted (where else?) through Google Chrome.

    The difficulties in implementing OT described here could be shifted entirely to the server side, on Google's end. Developers, meanwhile, can simply concentrate on getting their signals across, sending messages that trigger their systems to traverse the "state spaces," without having to know what a state space is. So one could take the code for Google Docs, let's say, and rewrite it so that multiple users are capable of editing a document concurrently. It requires a client/server architecture with a massive server scale, but Google may be the one to pull this off.

    It doesn't take a big leap of logic to take that same altered, synchronized word processor and convert it into an instant messaging app. If everyone's writing, drawing, and embedding links and images to the same file in real-time, then the document itself becomes a de facto messaging tool. Change the front end a bit -- or what open source developers call the "chrome" -- and you essentially have a better Google Chat than Google Chat, one where you can see what everyone is typing as they type it.

    As Google engineer Lars Rasmussen wrote this morning, "In Google Wave you create a wave and add people to it. Everyone on your wave can use richly formatted text, photos, gadgets, and even feeds from other sources on the Web. They can insert a reply or edit the wave directly. It's concurrent rich-text editing, where you see on your screen nearly instantly what your fellow collaborators are typing in your wave. That means Google Wave is just as well suited for quick messages as for persistent content -- it allows for both collaboration and communication. You can also use 'playback' to rewind the wave and see how it evolved."

    But that's just the front end, from the user's perspective -- more accurately, it's the first test application of the architecture Google is implementing. The architecture itself is the major undertaking here.

    Deployment of services will take place through a client-side gadget that communicates using a derivative of the existing OpenSocial API, coupled with server-side applications written in such common languages as Python and Java. So Google isn't re-inventing the wheel here either. But since Wave isn't really an app yet, isn't a fully fleshed-out architecture yet, and doesn't have a complete platform yet, it may lack the impetus and momentum that .NET and Java have to enable good ideas to come to fruition. It does have the scale it needs, however, and that's the biggest ace in Google's hand right now.

    Copyright Betanews, Inc. 2009

  • Link to 'BetaNews.Com/2009/05/28/Web_browsers_get_a_12%_speed_boost_in_Windows_7_over_Vista'

    Web browsers get a 12% speed boost in Windows 7 over Vista

    Published: May 28, 2009, 9:59pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If you've been testing the final Windows 7 Release Candidate on your own physical platform, and you wonder what's giving you that feeling that it's just a bit peppier, a tad zippier: It's not an illusion. Betanews tests conducted all this week, concluding today -- comparing all the major stable-release and development Windows-based Web browsers on exactly the same physical computer, with fresh Windows Vista SP2 and Windows 7 RC partitions -- confirmed what our eyes and gut feelings were telling us: On average, browsers ran 11.9% faster in Windows 7 than on the same machine running Vista SP2, with most individual speed gains falling right around that mark.

    Internet Explorer 8, for example, runs 15% faster in Windows 7 than in Vista SP2, in multiple tests whose results were within a hundredth of a point of one another. Using our performance index as a guide, if you consider the relatively slow Internet Explorer 7 in Vista SP2 as a 1.00, then in a fresh test of IE8 on the same platform, the newer browser in Vista SP2 scored a 2.03 -- performing generally better than double its predecessor. But in Windows 7, the score for IE8 rises to a 2.27.
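
    As a sketch of that arithmetic (an illustration, not our published methodology): the OS-to-OS gain implied by two index scores is simply their ratio. Because the index also folds in standards compliance, the raw figure it yields for IE8 -- about 11.8% -- is not identical to the speed-only 15% gain cited above:

        # The OS-to-OS gain implied by two index scores is just their ratio.
        # The index blends speed with standards compliance, so this raw
        # figure differs from the speed-only 15% measurement cited above.
        def index_gain(vista_score, win7_score):
            return (win7_score / vista_score - 1.0) * 100.0

        print(f"IE8 implied index gain: {index_gain(2.03, 2.27):.1f}%")  # -> 11.8%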

    A word about our Windows Web browser test suite

    In an indication that Mozilla's developers may be testing their development builds on Windows 7, both of Firefox's private development channels show greater performance boosts from Win7 than the current stable release and the current public beta do. Firefox 3.0.10 enjoyed a nice shot in the arm with an 11.6% speed gain over Vista SP2, and an index score of 4.36 in Win7 versus 3.96 in Vista. Firefox 3.5 Beta 4 saw a speed gain right about the average, 12%, with a Win7 score of 9.29 versus 8.49.

    But in tests we repeated just to make certain of the results -- again, with very minimal deviations, and some dead-on exact time values in the SunSpider when repeated -- yesterday's daily private developer build of Firefox 3.5 (once slated to be called "Beta 5," but which may be promoted) posted 28% better speed in Win7 than Vista, and the 3.6 Alpha 1 "Shiretoko" build was 30% faster in Win7. The 3.5 probable-RC scored a 9.20 in Win7 versus 7.62 in Vista; and 3.6 Alpha 1 scored a 9.10 versus 7.45.

    Today, we began testing the new Chrome 3 browser, Google's latest development channel build, as Chrome 2 proceeds to the "Stable" column and Chrome 1 is put out to pasture. We noted that Chrome 3 currently scores a 94% on the Acid3 test -- a setback from Chrome 2's 100% score which we can only assume has to do with something the Google developers are in the midst of testing. That slip-up almost completely wiped out Chrome 3's faster rendering and cryptography benchmark index gains, with build 3.0.182.2 scoring a 12.24 in Vista SP2 against build 2.0.177.1's 12.23. But in Windows 7, Chrome 3 shows more improvement, indicating that even Google is taking apart Microsoft's new operating system in the labs. Chrome 3 was faster in Win7 by 16% compared to Chrome 2's 12%, and Chrome 3's index score in Win7 was 13.86, versus 13.43 for Chrome 2.

    Relative Windows Web browser performance on physical Vista and Windows 7 platforms, as measured May 28, 2009.

    Opera's raw performance in our tests thus far continues to be unremarkable. The stable release version 9.64 benefits almost not at all from Windows 7 -- just 2% -- with an index score of 4.55 in Win7 versus 4.51 in Vista. But the public Opera 10 Alpha build fared much better, gaining 15% more speed in Win7, and scoring a 5.47 there versus 5.03. The latest Opera snapshot build, which now includes the "Beta" graphics and a new look-and-feel, saw an 8% boost from Win7.

    The absolute shock of the day, however, comes from Apple. For reasons we can only surmise Apple's developers must be studying (if not, they should be), the latest Safari 4 Beta build 528.17 runs 22% slower in Windows 7 than in Vista SP2. In fact, Safari 4's index score slipped behind those of both Google Chrome 2 and 3. We reconditioned our test platforms twice just to verify, and once again, the variation was minimal and the results were confirmed: Safari 4 scored an 11.43 in Windows 7, versus that staggering 14.12 in Vista SP2. This while the stable Safari 3.2.3 enjoyed an 18% speed increase in Windows 7.

    Unexplained anomalies notwithstanding, the evidence is mounting that all browser developers will be receiving a gift from Microsoft, probably by the fall, in the form of 10% to 15% better performance without having to lift a finger.

    Copyright Betanews, Inc. 2009


  • Link to 'BetaNews.Com/2009/05/28/How_soon_will_AOL_become_Google_s_prime_competitor_'

    How soon will AOL become Google's prime competitor?

    Published: May 28, 2009, 5:53pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It's time to stop with all the "I told you so's" and the gloating and the self-congratulation on the part of everyone (myself included) who never saw synergies between the former America Online and Time Warner -- all of us just as capable of reading the big, fluorescent handwriting on the wall as anyone else. We knew it wouldn't work. End of part one.

    The task before Tim Armstrong -- minted as CEO of AOL in March -- and his team is to define the company. It has some very old parts and some very big assets, but other than that, it's a startup. If you "Google" Tim Armstrong (a number of ironies latent in that phrase), you discover almost instantly the type of independent CEO he will be. He helped build Google into the advertising sales giant it is today (its merger with DoubleClick notwithstanding), and he takes the knowledge of that blueprint with him to AOL. He is an ad man, and AOL will be an advertising platform.

    Before you come to the conclusion that AOL is already a dead hulk, keep in mind that the part people focus on the most is the part that faces the public most directly. That's the AOL "portal," the service that defined the original company and which used to garner subscribers. It's still a problem for AOL, but getting rid of Time Warner's baggage is the first step to resolving that problem.

    Behind the portal -- the part many folks don't appreciate -- is what is and may continue to be for some time the Internet's most effective advertising component, Platform-A. According to comScore rankings released last week, Platform-A continues to reach 91% of US Internet users -- meaning, sometime over the month of April, 9 out of 10 Americans were served an ad by Platform-A. Google's network (formerly considered DoubleClick's) reaches 85% of American users, and that number continues to grow. But the efficiency and versatility of Platform-A are now an established fact, and Google's gains will not result in losses for Platform-A in this department -- this isn't a market share metric.
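
    The distinction matters: "reach" counts unique users served at least once -- not dollars or impressions won -- so two networks' reaches overlap freely and aren't exclusive slices of one pie. A toy illustration, with invented numbers:

        # A toy illustration of reach vs. market share (numbers invented).
        # Reach counts unique users served at least one ad, so two networks'
        # reaches overlap; they are not exclusive slices of one pie.
        users = 1000
        platform_a = set(range(0, 910))   # reaches 91% of all users
        google_net = set(range(60, 910))  # reaches 85%, largely the same people

        print(f"Platform-A reach: {len(platform_a) / users:.0%}")            # 91%
        print(f"Google network reach: {len(google_net) / users:.0%}")        # 85%
        print(f"Union of both: {len(platform_a | google_net) / users:.0%}")  # 91%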

    So if Platform-A were a rocket, we could say it can comfortably burn at 91%. But how much thrust does that deliver? That's been the problem of late, as indicated by AOL's own first quarter corporate report, shared with the SEC last month (PDF available here). Advertising revenue for AOL Network declined by 20% annually in the last quarter, for reasons the company described thus: "The decrease in display Advertising revenues generated on the AOL Network was primarily due to weakening global economic conditions, which contributed to lower demand from a number of advertiser categories and downward pricing pressure on advertising inventory. The decrease in paid-search Advertising revenues on the AOL Network, which are generated primarily through AOL's strategic relationship with Google, was attributable primarily to decreases in search query volume on certain AOL Network properties."

    That strategic relationship is now being rethought, as Google is exercising its option to sell its 5% stake in AOL back to Time Warner prior to the spinoff. It isn't the reach of Platform-A that's been wearing AOL down, if we read this correctly. It's that the effectiveness of "portals" -- the things that AOL said four years ago would constitute its primary product -- is declining, especially amid an Internet landscape that's being defined more and more by connectivity and social interlinking than by front ends and gateways.

    The big synergy play with AOL + Time Warner was the notion that content would flow freely from the New York + Hollywood juggernaut into the Dulles factory. But that synergy could only work if the principal dynamic of the Internet's evolution were to cease in its tracks: namely, that the Internet is changing the nature of content itself, forcing old New York and old Hollywood to reconstruct themselves in order to survive. That restricted the flow of oxygen to AOL's portal, as Time Warner became the biggest weight around its neck.

    As of the third quarter of 2009, that'll be gone. With a healthy platform still capable of generating the energy it needs to move, Tim Armstrong and company can now focus their attention on redefining the part of AOL that faces the public. They now have the opportunity to scrap everything we think of with regard to a "portal" and, without Time Warner weighing down on it and without Google taking a piece of it, re-engineer the part of AOL that connects with its customers.

    And looking at Armstrong's record, does anyone have reason to believe now that he can't accomplish that?

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/27/Psystar_promises_bankruptcy_court_it_will_rethink_its_business_plan'

    Psystar promises bankruptcy court it will rethink its business plan

    Published: May 27, 2009, 8:49pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It isn't Psystar's legal tangle with Apple Inc. that led to its Chapter 11 bankruptcy filing in a Florida federal court last Thursday. Rather, seven of the independent PC maker's top 10 creditors were credit card processing services, making up about 44% of its outstanding debt to those top 10 creditors. The IRS accounted for less than 5% of that debt. This according to court documents obtained by Betanews from the US Bankruptcy Court for Southern Florida.

    Although Psystar also didn't blame Apple in its petition for an emergency relief hearing the following day, it did mention the company as the developer of the operating system on which its Open-brand PCs are based. Instead, the story Psystar told is one that could apply to a thousand independent PC makers across the country, except for one important element: It's almost impossible for an OEM of Psystar's size to compete in the PC market on price alone, while still maintaining profitability.

    So it took a shot at developing a PC that could command a respectable premium -- something that distinguished it from its competition, enabling it to increase its margins. But in this market and this economy, the gamble hasn't paid off.

    "Debtor [Psystar] sales have been greatly affected by the decrease in consumer spending. The financial crisis has also caused creditors to tighten up their terms and become more demanding for immediate payment," last Thursday's petition reads. "Debtor's vendors due to their own financial problems are not being able to supply all necessary items to allow Debtor to produce their product, thus, forcing Debtor to pay higher prices for parts in order to fulfill customer orders in a timely manner and to assure satisfaction with the product. These factors seriously contribute to the Debtor not being able to turn a significant profit in each sale."

    Psystar's profits were "diminutive" during the bad economy, it goes on, with the hopes of a turnaround on the horizon. That hasn't happened, and while the company now seeks time and space to make a fresh start of things, its plan so far is to build again around its "valuable intellectual property" -- no doubt a reference to its ability to produce Mac work-alikes.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/27/6_Gbps_SATA_transfer_speed_is_on_its_way'

    6 Gbps SATA transfer speed is on its way

    Published: May 27, 2009, 6:30pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The solid-state disk drive is supposed to be fast. After all, it's mostly made of memory -- and last we checked, flash memory was fast. In practice, however, some applications with SSDs can be slower than with HDDs, the reason being the way data is cached as it's collected and moved through I/O channels into system RAM.

    The transfer interface is the bottleneck, and the engineers that contribute to the Serial ATA (SATA) transfer specification admit that fact openly. Just a few years ago, you might never have thought that 3 gigabits per second (Gbps) would end up causing problems; but as it turned out, the faster SATA 2.0 maximum transfer rate enabled new applications, which ended up introducing users to those bottlenecks for the first time.

    Now, the SATA-IO organization is preparing to eliminate that logjam, with the publication this morning of the SATA 3.0 specification. Its goal is to accelerate maximum transfer speeds to 6 Gbps, and in so doing, widen the bandwidth between components where these new bottlenecks have recently been introduced.

    "SSDs provide faster data access and are more robust and reliable than standard HDDs because they do not incur the latency associated with rotating media," states a recent SATA-IO white paper. "SSDs are used in a variety of applications but one of the most exciting is two-tier, hybrid drive systems for PCs. The SSD serves as short-term and immediate storage, leveraging its lower latency to speed boot time and disk heap access while a HDD, with its lower cost per megabyte, provides efficient long-term storage. With SATA 3 Gb/s, SSDs are already approaching the performance wall with sustained throughput rates of 250-260 MB/s [megabytes per second, note the capital "B"]. Next-generation SSDs are expected to require 6 Gb/s connectivity to allow networks to take full advantage of the higher levels of throughput these devices can achieve."

    The rapidly improved transfer rate may also increase not only the efficiency but also the lifespan of conventional hard drives, by extending a concept called native command queueing (NCQ), which earlier SATA revisions introduced. With data transfer and data processing threads operating at roughly parallel speeds today, the only way existing HDD controller cards can synchronize these processes is by running them in sequence. That eliminates the opportunity controllers might have to read some data out of sequence (similar to the way Internet packets are received out of sequence) and assemble it later. By doubling the theoretical maximum throughput rate, HDDs can read more data from rotating cylinders along parallel tracks, without having to move the head...and that reduces wear on the drive head. Of course, a new generation of drive controllers will need to be created to take advantage of this capability.
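
    To illustrate the reordering idea, here is a toy model. Real NCQ scheduling happens inside the drive's firmware and also weighs rotational position; this sketch, with made-up block addresses, only shows how servicing a queue in address order shortens total head travel compared with arrival order:

        # Toy model: total head travel for a queue of reads, serviced
        # either in arrival order or reordered by logical block address.

        def head_travel(requests, start=0):
            """Sum of seek distances when requests are serviced in order."""
            position, total = start, 0
            for lba in requests:
                total += abs(lba - position)
                position = lba
            return total

        queue = [900, 10, 880, 40, 860]        # hypothetical queued read LBAs

        print("FIFO:     ", head_travel(queue))          # 4320 "tracks"
        print("Reordered:", head_travel(sorted(queue)))  # 900 "tracks"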

    However, the best news of all is that a new generation of SATA cables does not have to be created. We can all use the cables we have now, to take advantage of the performance gains to come. Though the new controller cards will themselves be replacements, the SATA interface itself doesn't change to the extent that new cables are required. So existing external equipment, including the latest models in the exploding realm of external HDD storage, will be fully compatible.

    SATA-IO is making no promises as to how soon consumers will start seeing the new controllers, or PCs where those controllers are pre-installed.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/27/Top_10_Windows_7_Features__3__XP_Mode'

    Top 10 Windows 7 Features #3: XP Mode

    Published: May 27, 2009, 1:02am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    In some ways, Steve Ballmer is proving to be a more capable Microsoft CEO than Bill Gates, especially recently. Whereas Gates' strategies have typically been associated with playing unfair, rewriting the rules, and being blatantly defiant about it in the process, Ballmer's strategy of taking away the argument -- eliminating the appearance of advantage and then still winning -- has been more effective, and more difficult to combat in both the marketplace and the courtroom.

    Nowhere does the "Playing Too Fair" strategy make a bigger display of itself in Microsoft's favor than in its latest permutation of virtualization technology -- a move that many individuals (myself included) directly suggested the company should do, and the company then did. Since 2004, Microsoft has offered a no-cost way for users to run Windows XP in a kind of hosted envelope, one which users were delighted to discover worked fairly well in Windows Vista. But it didn't offer any real advantages -- to use a program that relied on XP, you had to work within that envelope, using networking tools to associate two machines running on the same CPU.

    Meanwhile, business users were being offered an ingenious little tool (ingenious enough for some folks to infer automatically it wasn't Microsoft that created it) that extended the virtualization envelope to the main, physical desktop. SoftGrid let users run an application on a computer as though it were installed on that computer, without it actually being there -- it could be on a virtual machine, or on a virtual or physical system someplace else in the network. Users would not have to be informed of the difference.

    This is where I said, "You should build that feature into the client OS, so that if a program required XP, it would still run without the error." And Microsoft's folks responded, "Yeah, that's a good idea." And then, I assumed, it was filed away with all the other "good ideas" we've had over the years.

    But this one has come to fruition. It's not a completely integrated way of running XP alongside Win7, but if you think about it, perhaps it shouldn't be. Years ago prior to Vista, Microsoft's own staff members disputed with one another whether PowerShell should be part of the system. Its initial compromise -- distributing it as a free download, but not as part of the Vista package -- enabled spokespersons to continue saying PowerShell wasn't really a part of Vista. By that same logic, XP Mode is not a part of Windows 7. But there's no dispute going on among spokespersons today; XP Mode is being described as "a feature of Windows 7 Professional, Ultimate, Enterprise." That may mean it's downloadable separately for free, although the company is saying that it may come pre-installed by OEMs on new PCs.

    XP Mode, at first glance, looks like a slightly enhanced version of Virtual PC 2007 hosting an XP VM, dressed in the hue that Microsoft's Mark Russinovich has lovingly dubbed "Teletubbies Blue." Lurking beneath the surface here, however, is a vastly enhanced version of what VPC called "Virtual Machine Additions." With "Integration Features," there's a new, if limited, channel of communication between the Win7 host and the XP guest. One of the topics of discussion between these two parties is the changes that are made to XP's copy of the System Registry.

    Those changes are often made on account of programs being installed, so the new Windows Virtual PC hypervisor monitors for those changes and takes notes. It does this so it can add whatever programs you installed in the XP environment, in the Win7 start menu as well, along with the necessary commands to trigger running those programs through the VM.
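
    Microsoft hasn't published the internals of that monitoring channel, so here is a loose sketch of the "take notes" idea using Python's standard winreg module: snapshot the list of registered applications before an install, snapshot it again afterward, and mirror whatever is new. The Registry key and the mirroring step are our simplification for illustration, not Microsoft's implementation:

        # Windows-only sketch: diff the Uninstall key before and after an
        # install. (The real integration channel watches the XP guest's
        # Registry; this runs against the local machine's for illustration.)
        import winreg

        UNINSTALL = r"Software\Microsoft\Windows\CurrentVersion\Uninstall"

        def installed_apps():
            """Return the set of registered application subkey names."""
            apps = set()
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as key:
                for i in range(winreg.QueryInfoKey(key)[0]):
                    apps.add(winreg.EnumKey(key, i))
            return apps

        before = installed_apps()
        input("Install the application, then press Enter... ")
        for new_app in sorted(installed_apps() - before):
            print(f"Would add a Start Menu entry launching {new_app} via the VM")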

    Windows XP Mode settings in Windows 7

    There are no instructions in the Windows 7 Release Candidate as yet for how to do this integration (documentation is often the last feature to be added to any software), but for once, that's not a big problem. It's very conceivable that any novice user can be instructed in this process, because at long last, Microsoft has avoided its usual tack of "wizardizing" tasks into multiple-step, tunnel-like processes that draw out the simplest of functions into nightmarish, elongated marathon sessions. Instead, the company decided that a simple and direct, if not obvious, approach was warranted, and here it is: To install an XP program into your Windows 7 Start Menu, install it in the XP virtual machine first, and then exit the XP VM. Wait about 20 seconds.

    XP applications show up in Windows 7's Start Menu

    During that time, Windows Virtual PC processes all those notes it was taking, and creates duplicate entries for newly installed XP apps in a subfolder of the Windows Virtual PC menu entry, called Virtual Windows XP Applications. You can move the entries here elsewhere or pin them to the Start Menu just like any other.

    For our tests, we wanted to experiment with a program that technically should be capable of running in Vista, but just isn't. FrontPage 2003 is an orphaned component of Microsoft Office 2003 whose principal function appeared to be to mangle the World-Wide Web into a miasma of metacode with the objective of only being executable using Internet Explorer 4.0. There's only one reason why anyone would want to use it today, and it's a very legitimate reason: to rescue projects that businesses launched using FrontPage, thinking they would one day become profitable, so that their assets may be usable by more sensible applications.

    FrontPage's dislike of Vista has been demonstrated on numerous occasions; Microsoft tried to address the problems with Service Packs, but never successfully. In unrelated tests years ago on my systems here, Office 2003 Service Pack 3 worked fine for every application except FP 2003. So it is amazing to see the program actually running, and in working order, within Windows 7 as if nothing had happened.

    Next: What does one do with XP-in-a-box?

    Of course, the real question becomes, what does one do with such an application once it's running? Like I said, the real purpose for running an app such as this, in a context such as this, doesn't have much to do with its original function in life. But what you can do is get at the data, and there are some tools here for getting that data out. For example, even though the virtual envelope's directories are local to its own file system ("My Documents," for example, belongs to the VM and not the physical machine), all of your physical hard drives are automatically given share names, so you can save or export material outside the envelope with confidence. Your VM's system Clipboard is shared with the physical one, so you can cut and paste between older and newer "rescue" documents.

    Printing is one other way you may be able to at least make archival records of older projects. Windows Virtual PC does not automatically install the equivalents for the printers recognized by Windows 7, so you have to install those same printers through the hypervisor first. Of course, you need the XP drivers for your printers -- Vista veterans will recall the nightmares regarding getting updated drivers for their perfectly functional devices, years now after Vista's launch. (Believe it or not, in our tests, Windows 7 RC recognized an Epson Stylus photo printer that Vista to this day still rejects.)

    Here is where we encountered some problems: With Windows Virtual PC, printers are supposed to be shared by means of a virtual USB connection, accessible from a menu bar command in the hypervisor -- something the user doesn't see if she's running a seamless app. So while printing from the virtual XP worked fine through the hypervisor window, it did not work from a seamless app window because the seamless XP guest does not include the hypervisor menu.

    So it's not "seamless" yet, and inevitably that will cause some problems with some users somewhere. Conceivably, Microsoft could come up with an XP-based utility that gives the user a way to connect the virtual USB cable, maybe with a keyboard shortcut. But at some point, someone will try scooting the XP seamless window to the top or the side, for Windows 7's new "Aero Snap" mode, and they'll notice it doesn't work the same way -- XP can't respond to events created by and for Win7. Similarly, a snapshot of the running XP app does not appear in the new taskbar when you hover over its icon -- it would require the hypervisor taking the snapshot on the app's behalf.

    A Windows XP application runs 'seamlessly' on Windows 7's desktop

    While it seems these matters won't be significant, businesses that depend on "legacy" applications will have users who inevitably call the support desk with these little surface issues. There will be complaints, and support staff will have to be ready for them -- precisely because the issues are insignificant, even though the complainers themselves cannot be dismissed.

    But the fact that XP Mode exists at all is an indication of a startling realization: The disappointment of Vista made enough of a dent in Microsoft's mindset that it opened itself up to a very broad array of suggestions, most notably the ones that pointed out incontrovertibly that the company already had the tools necessary to eliminate one of the biggest complaints customers have. It was just a matter of deciding to ship the thing. Though that decision took some time, it certainly appears to bear the Steve Ballmer signature.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.


    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/26/Vista_SP2__Windows_Server_2008_SP2_go_live'

    Vista SP2, Windows Server 2008 SP2 go live

    Published: May 26, 2009, 6:48pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Microsoft Windows Vista SP2 32-bit RTM from Fileforum now.

    Download Microsoft Windows Server 2008 SP2 32-bit RTM from Fileforum now.

    It's no longer a test: You can now apply Microsoft's complete updates to 32- and 64-bit versions of Windows Vista and Windows Server 2008, in the release-to-manufacturing form. If you've been testing the recent beta of Vista SP2, you will need to uninstall it first from the Programs and Features control panel. From the dialog box on your system, look for update number KB948465, choose that and click Uninstall.

    UPDATE: You could run into a little problem if you installed Vista SP2 and then tried out vLite as a way to slim it down. Some testers have issued complaints or feedback notices, and Microsoft has just issued a bulletin addressing their concerns: Apparently, if you removed components from Vista SP2 Beta using vLite, you can't then uninstall those same components prior to installing the SP2 RTM.

    Internet Explorer 8 is not part of this Service Pack. That's not a change from before, though some individuals may rightly be skeptical. Following Microsoft's new policy regarding the marketing and distribution of its Web browser, IE8 is distributed separately.

    News of the Service Pack's RTM release comes as Microsoft found itself correcting a holiday blunder, after having issued a bulletin that appeared in testers' inboxes saying that the Windows 7 beta program ends June 1. That's not quite right, as Brandon LeBlanc found himself stating very early this morning: On July 1, not June 1, the bi-hourly shutdown process for Windows 7 Beta 1 will begin -- meaning, each session will start timing itself for a two-hour maximum. Beta 1 will start completely shutting down on August 1. However, that's for Beta 1, not for the Windows 7 RC. That more recent build (7100) is still being distributed, and it will not start terminating itself in August like Beta 1.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/26/To_Bing_or_not_to_Bing_'

    To Bing or not to Bing?

    Published: May 26, 2009, 5:51pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Focusing on Microsoft's dilemma -- how it can compete against Google in a market that Google now solidly owns -- blinds one to the bigger problem facing anyone trying to do business on the Internet today, including Microsoft: No one really has a clue as to how the damned thing works.

    Arguably, Google may be closer to discovering the clue than anyone. But its clever marketing tactics, which lead the technology press to cover color changes to the Gmail toolbar and the shifting of department names from the bottom to the right side of the corporate logo as strike-up-the-band events, indicate to me that Google is just as indecisive about a viable long-term business plan as everyone else. It's just better at masking that fact.

    Google has built its empire on cleverness. Its very name is symbolic of its approach to business from its outset: Throw all available resources at working the problem, until a solution eventually sprouts forth someplace. It isn't an efficient approach to business, but like a password-cracking algorithm, it doesn't have to be. Besides, who else is going to not only come up with a better solution but fund it, manage it, advertise it, and nurture it to health?

    Cleverness shrinks in the face of genius. Since the advent of the Web, there's been a palpable anxiety over whether someone in his basement laboratory would stumble upon (to borrow a phrase from a former Google competitor) a realistic formula for long-term business success, that would trump everyone else's clever ploys. It's like the Grand Unified Field Theory -- it's a formula that people believe exists out there in the ether, unwritten. Except that in this case, it's probably simpler, and that gives people cause for concern.

    You see, the big problem with the Internet as a medium is that it really hasn't been designed yet. No one knows a way to build a service that is useful and reliable that people will continue to need and want for the foreseeable future, and build a brand around that service. In lieu of a real formula, publishers have come to rely upon the fickle finger of Google for their livelihoods, to steer some fraction of this nebulous mass of traffic in their general direction. And since Google itself doesn't really have a programmed technique for executing this primary business function of the Internet, the task of guessing how Google obtains prime placement for search queries and high-level placement for stories in Google News, or how it obtains the relative value of phrases in AdWords, has become a cottage industry.

    It is this field of uncertainty, of not knowing how this thing really works, that gives Google its power. Through no fault of its own, Google is a socialist empire. It thrives upon the equal distribution of resources among its vast multitude of loyal citizens. The New York Times clamors for space alongside The Boy Genius Report in the daily war for territory on Google News, the modern era's Pravda. (This while the real Pravda competes for aggregator space too, with headlines like this from today: "Dog gives birth to mutant creature that resembles human being.") It sets the value of terms, phrases, and concepts on a scale that masquerades as an open market, but whose own customers are never given a clear picture of their dynamics. And it espouses a dogma of equitability that's soothing and appealing to a populace in the throes of a revolution.

    But the Internet revolution is ending, and everyone knows it. In the absence of revolution, it takes more than a clever messaging authority to lead an empire. It's time for a capitalist approach, and who knows where that will come from?

    Meanwhile, journalists who have become conditioned to covering the minutiae of search engine marketing and Internet portals are buzzing this morning about Microsoft probably shifting its emphasis from the meaningless "Kumo" domain name to the meaningless "Bing" domain name. And while "Bing" does have the virtue of sounding more like a viable verb -- I can't imagine myself ever Kumo-ing the nearest dry cleaners -- focusing on the branding problem steers attention (perhaps intentionally) away from the bigger issue: The next great search engine must efficiently lead the seeker directly to what she's searching for, must lead demand to supply, must direct client to server. It must be capable of doing what Google, even throwing more servers into its infinite mix, has cleverly avoided ever having to accomplish: simply answering the question.

    It could, if it were engineered to do so, not with brute force but with logic. There's a feeling out there that whoever gets the logic gains the keys to the kingdom.

    That's why there's more than a pinch of tension and anxiety over the Wolfram Alpha project: because folks know that Google's brute-force approach to connecting a question to a possible answer could someday be outmoded by a brilliant mathematical solution. If a search engine could efficiently answer the query, "I'm five minutes late to an appointment, what's the quickest route to the venue that passes by a store where I can replace my necktie?" then not only would users flock to the service, but (far, far more importantly) businesses would pay good money to be part of the answer to that question.

    Newspapers that are suffering today because they can't transfer brand loyalty to the Internet, and which today depend on gaming Google with superlative headlines to get them by from day to day, would probably gamble a great deal of money toward funding a solution that's more than clever, but genius.

    Which brings us back to Microsoft, and the giggles, sneers, and coughs generated whenever the term "genius" is brought up in association or mere juxtaposition with the manufacturer of the Zune. It's never had to display real genius in order to succeed, which could be why many believe that Microsoft could yet be clever -- that it could convince the market that it's closer to a real solution, a real business model, than Google or anyone else.

    The reason I doubt it -- at least today -- is the same reason I doubt Google has a mission plan when its blogs boast of the shifting of the words in its logo. When we're too easily led to focusing on the minutiae of the matter -- whether to call it Kumo or Bing or Splong or Fyadqorst -- it's probable that there is no big picture for us to be losing sight of. We're discussing no less than the invention of the wheel, and as Angela Gunn's favorite author Douglas Adams put it in a true work of genius, we're arguing over what color it should be.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/23/Google_Chrome_2_is_20%_faster_than_Chrome_1_in_physical_speed_tests'

    Google Chrome 2 is 20% faster than Chrome 1 in physical speed tests

    Published: May 23, 2009, 4:34am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    Yesterday, Google traded development track 1 of its Chrome Web browser for track 2, making the latter effectively the "stable" edition of the browser, even though it's still officially under development and not yet feature-complete. Many users of version 1 found themselves automatically upgraded to version 2, and may very well have noticed a subsequent speed increase from the JavaScript interpreter.

    In a blog post yesterday, Google said that speed increase would be about 30%. But is that an accurate assessment, especially given that Google's V8 JavaScript benchmark was devised by Google to test its V8 JavaScript interpreter?

    On Betanews' new physical test platform for Windows-based Web browsers and operating systems, whose construction was completed Friday, our latest tests show that Chrome 2.0.177.1 was 20.4% faster than Chrome 1.0.154.65 in independent speed benchmarks other than V8. Our adjusted performance score for Chrome 2 on our new platform was 21.4% better than Chrome 1's, relative to the performance of Microsoft Internet Explorer 7 on the same system.

    For the past few months, we'd been testing Web browser performance on easy-to-manage virtual machines. Our move to a physical platform did change our index numbers with respect to Windows Vista, but it changed them fairly proportionately to one another.

    Microsoft Internet Explorer 7 ran a little faster in Windows Vista than the IE7 we measured on a virtual system. That should be no surprise to anybody. Naturally, nothing is slower than IE7, which is why we still use it as our index system. But as we verified by repeating the test circumstances from scratch, IE7's speed did jump more than the other browsers in our test.

    But not by much. While our initial performance index score for Internet Explorer 8 showed it easily doubling IE7 with room to spare, our reset score was still double that of IE7, with about 215% the speed of its predecessor. IE8's physical index score is 2.03, reflecting a much better SunSpider score for IE7 on the physical platform -- its string processing score is especially higher.

    All the other browsers in our test are, as we said, proportionately lower, but with enough variation to justify our having moved our test bed to a physical platform. In our latest tests, Apple's Safari 4 beta registered a score of 14.12 -- meaning Apple's latest browser performs over 14 times better than Microsoft's earlier browser. Divide that score by 2.03, and you'll see how much better a performer Safari 4 is than IE8: about 6.96, or roughly seven times its speed.

    Relative Windows Web browser performance on a physical Vista platform, as measured May 22, 2009.

    Google is following close behind in the race for browser performance, with an index score of 12.23. Let's break that score down a bit: Version 2 scored a 3.28 against IE7 in the HowToCreate.co.uk CSS rendering test, with the JavaScript timer adjusted to compensate for browsers (especially Safari) that fire the onLoad event differently. That's a test that loads a page and then renders a cavalcade of successive blocks of CSS. Since the scores for those renderings vary from the very first one down the line, we take that test five times and average their times together. Chrome 1 scored a 2.83 in that test, with IE8 scoring 1.99 and the Safari 4 beta a stunning 5.98. Even after compensating for what some call an anomaly and others a "cheat," Safari 4's score remains outstanding.

    Chrome 2's Celtic Kane basic JavaScript score remains high at 4.83, but Safari 4's score here remains incredible at 7.68. Where Chrome's performance pulls up closest to Safari's is in the SunSpider test -- a 32.48 versus a 34.50. The big reason for these huge scores has to do with string processing -- how long it takes to process text in memory. IE7 is notoriously slow at this, but IE8's score in string processing of 12.74 indicates that IE8 is almost 13 times faster than its predecessor at handling text.
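
    Betanews hasn't published an exact weighting formula for these composite scores, but the general approach described here is simple to sketch: express each browser's result as a multiple of IE7's on the same test, average repeated runs, and combine the per-test ratios into one index. The test names and timings below are hypothetical:

        # Hypothetical timings, in milliseconds; lower is faster. Each
        # browser's index is the average of its per-test speed ratios
        # versus IE7, so IE7 scores 1.00 by definition.

        def index_score(browser_ms, ie7_ms):
            ratios = []
            for test, runs in browser_ms.items():
                mean = sum(runs) / len(runs)                # average repeat runs
                ie7_mean = sum(ie7_ms[test]) / len(ie7_ms[test])
                ratios.append(ie7_mean / mean)              # less time -> higher ratio
            return sum(ratios) / len(ratios)

        ie7     = {"sunspider": [80000.0], "css": [5000.0, 5100.0], "js": [2000.0]}
        safari4 = {"sunspider": [2300.0],  "css": [840.0, 860.0],   "js": [260.0]}

        print(f"Index vs. IE7: {index_score(safari4, ie7):.2f}")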

    After restarting our test matrix fresh on a physical Vista platform, we're adjusting the scores for Firefox browsers to account for the faster IE7. But our latest battery of tests still verifies that of Mozilla's three development browsers, the one the public's testing right now -- Firefox 3.5 Beta 4 -- remains the fastest. We've noted the "Beta 5" designation has recently been removed from the 3.5 "Shiretoko" track, indicating a likely move to Release Candidate status. But while Beta 4 posted a revised index score of 8.49, the latest daily build of the private tests of 3.5 posted a 7.47 -- a score we verified by repeating the circumstances. And the latest daily build of Firefox 3.6 Alpha 1 "Minefield" followed behind at 7.25.

    We'll be following the suddenly resurgent arena of competitive Web browser development on our physical platform from now on. Our machine uses a Gigabyte GA-965P-DS3 motherboard with an Intel 965 chipset, running a 2.40 GHz Intel Core 2 Quad Q6600 processor with 3 GB of DDR2 DRAM. Our display adapter is an Nvidia GeForce 8600 GTS, and our physical Vista platform is running an Nvidia brand driver.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/21/Google_s__30%_faster__Chrome_is_just_the_2.0_beta_released_as_RTM'

    Google's '30% faster' Chrome is just the 2.0 beta released as RTM

    Published: May 21, 2009, 11:56pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Up until today, Google had been distinguishing between development tracks 1 and 2 of its Chrome Web browser. Track 1 (last known build version 1.0.154.65) was the company's production edition, though a link on the same page where you could download 1.0 could take you to the "test" version instead, version 2.0.177.1. Google's always had interesting variations on the "beta" theme.

    Anyway, today the company stated on its blog that it's "updating to a faster version" of Chrome, quoting an internal benchmark score giving its JavaScript processing 32.1% better speed in the new version over the old version. Well, that new version -- as Betanews verified today -- is actually 2.0.177.1, which is the same "new version" it's been for a few weeks now. Users of version 2 -- which other services had been distributing as the "most recent release" -- will notice no difference in performance.

    The difference that some users will see is that there's no test version choice anymore; Google's download page takes the user straight to 2.0.177.1 for the first time. Gone are the links to the 1.0 editions, and users with 1.0 builds may (or may not) notice their browsers are being updated as we speak. In fact, in Betanews tests Thursday afternoon, Google's server download speed was nothing anyone would want to shout from the rooftops about.

    "Making the Web faster continues to be our main area of focus," reads a post on the Chrome blog by Chrome engineer Darin Fisher this afternoon. "Thanks to a new version of WebKit and an update to our JavaScript engine, V8, interactive web pages will run even faster. We've also made sure that JavaScript keeps running fast even when you have lots of tabs open. Try opening a bunch of Web applications and then running your favorite benchmark."

    As for anyone who's been confused by the version numbers, Fisher added, "We're referring to this as Chrome 2, but that's mainly a metric to help us keep track of changes internally. We don't give too much weight to version numbers and will continue to roll out useful updates as often as possible."

    Betanews tests (which do not use Google's own V8 benchmark algorithm, preferring to use independently developed or derived tests instead) show the latest build of Google Chrome 2 to be about 16.3% faster than Chrome 1 on an identically configured test virtual system. Prompted by reader requests, Betanews is building a new physical test platform that will enable us to gauge performance under different versions of Windows on the same hardware.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/21/Microsoft_s_move_toward_XML_standards_leads_to__200_million_penalty'

    Microsoft's move toward XML standards leads to $200 million penalty

    Published: May 21, 2009, 11:23pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    During the era of Office 2000's dominance in the desktop applications market, Microsoft was frequently criticized for forcing businesses into supporting a document format that was, by design, a moving target. Whenever the company added features to its Office components, support for those features had to be retrofitted onto the document format. That often made archives of thousands of older documents difficult for companies to manage.

    It was a situation which many thought would enable Microsoft to self-perpetuate, creating dependencies from which businesses couldn't escape, forcing them to invest in whatever new versions that came along just to maintain their efficiency. Whether it was attained by accident or design, it was such a prime market position for the company that when it announced in 2005 that it would sacrifice its own Office document formats for an entirely new, publicly viewable, XML-based scheme originally entitled Office Open XML, even Betanews asked the question, "Is It Truly Open?" To this day, even now that Microsoft's efforts led to the publication of an international standard based on OOXML, now called ISO 29500, people are wondering -- often aloud -- where's the string that's attached to this rug that Microsoft will eventually pull?

    In the meantime, a jury has awarded $200 million to i4i Limited Partnership, a Toronto-based collaborative software firm that was issued a patent in 1998 for the idea of maintaining a document's formatting in a separate file, and whose engineers claim they had the idea first. The case i4i made in its March 2007 suit essentially boiled down to the allegation that Microsoft's entire move toward XML was a willfully executed strategy against i4i.

    In 1994, just as HTML was first being investigated elsewhere as a vehicle for networked hypertext, i4i Ltd. applied for its US patent. For the time, its concept was novel: any notion of XML was years away, and the applications for which XML would be used had yet to be envisioned.

    "Electronic documents retain the key idea of binding the structure of the material with its content through the use of formatting information," reads the 1994 patent's background. "The formatting information in this case is in the form of codes inserted into the text stream. This invention addresses the ideas of structure and content in a new light to provide more flexible and efficient document storage and manipulation."

    The engineers mention SGML, the markup language which formed the foundation for HTML. But SGML created problems with regard to formatting, as they went on to write: "While embedding structural information in the content stream is accepted standard practice, it is inefficient and inflexible in a digital age. For manual production of documents the intermingling of the markup codes with the content is still the best way of communicating structure. For electronic storage and manipulation it suffers from a number of shortcomings. Current practice suffers from inflexibility. Documents combining structure and content are inflexible because they tie together structure and content into a single unit which must be modified together. The content is locked into one structure embodied by the embedded codes. Changes to either the structure or the content of the document require a complete new copy of the document."

    This is a problem which the flexible formatting of XML (which is called "eXtensible Markup Language" for good reason) went on to solve. For its part, i4i had much of the same idea, essentially for creating a way to use extensible tags to mean whatever they need to mean in the context of an electronic document. As an example cited by the 1994 patent application, the tag pair <Chapter> and </Chapter> could be used to denote a chapter number in the electronic manuscript of a book, and <Title> and </Title> may offset the book's title. The meanings of those tags with respect to the document at hand could be defined by a separate document, or by many separate documents pertaining to different classes of typesetting machines or displays.
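
    A small sketch may make the distinction concrete. The <Chapter> and <Title> tag names mirror the patent's example; the two "formatting documents" and everything else here are our illustration of a content stream carrying only extensible tags whose meanings live elsewhere:

        # Content carries only extensible tags; two separate "formatting
        # documents" give those tags device-specific meanings.
        import re

        content = "<Title>Silent Spring</Title><Chapter>1</Chapter>"

        print_map  = {"Title": "18pt bold, centered", "Chapter": "12pt small caps"}
        screen_map = {"Title": "<h1>...</h1>",        "Chapter": "<h2>...</h2>"}

        for tag, text in re.findall(r"<(\w+)>(.*?)</\1>", content):
            print(f"{text!r}: print as {print_map[tag]}; on screen as {screen_map[tag]}")

        # Swapping in a third map re-targets the same content to a new
        # device class without touching the content stream itself.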

    Did i4i create XML? Not specifically, though it did receive a patent for one of its principal ideas, years before the W3C began to come to the same conclusions. However, despite being what many observers at the time considered late to the game in adopting XML, it is Microsoft that ended up the loser in what some analysts are saying could be among the top five willful patent infringement awards in US history. The company has made clear it will appeal the jury's verdict.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/21/Making_Firefox_extensible_by_you_just_became_simple'

    Making Firefox extensible by you just became simple

    Published: May 21, 2009, 6:03pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    When you're a developer with Mozilla Labs or another open source laboratory, one of the things you'll often find yourself doing is "launching" a project before it's anywhere near complete. That's what it means to be truly open. In the case of Aza Raskin and his design team, last night he "launched" (that's Mozilla's term for it) a project to encourage Web site developers to build simpler but more accessible add-ons for the Firefox browser, by means of a JavaScript API and Firefox plug-in called Jetpack.

    Although Firefox is itself an exercise in JavaScript, crafting plug-ins to do simple things is not a simple matter. There's actually a cottage economy developing already around plug-ins, which Jetpack could disrupt merely by giving everyday programmers simpler means to make additions to the browser. "Specifically, Jetpack will be an exploration in using Web technologies to enhance the browser (e.g., HTML, CSS and JavaScript)," wrote Raskin late yesterday in his Call for Participation, "with the goal of allowing anyone who can build a Web site to participate in making the Web a better place to work, communicate and play."

    The surprise is that there's not much to it, and that actually may end up being its biggest benefit. With Jetpack installed in Firefox, the browser becomes instantly adaptable, even on a live basis -- a JavaScript coder can make changes to it without a restart. The language is JavaScript enhanced with jQuery, the transformative library that makes the language much more direct, driving events rather than merely reacting to them. The Jetpack API exposes just a few objects pertaining to the event matrix of the browser and some of the front end elements, particularly the status bar. That's where a lot of add-ons' exclusive output will appear, as the Firefox status bar becomes the counterpart of the Windows taskbar or the Mac Desktop dock.

    Mozilla Labs Jetpack - Intro & Tutorial from Aza Raskin on Vimeo.

    There's not much to show for what Jetpack 0.1 can enable a homebrew developer to do with Firefox right now, but the strongest case it makes for itself comes from the Gmail notifier demonstration, a little add-on whose total development time could not have consumed more than an hour. Its principal function queries the Atom feed from Gmail for a string of text, and parses that text to obtain the unread message count. Another function adds the digits for that number to the graphic that appears for the add-on in the status bar. Jetpack utilizes Mozilla and Mozilla Labs features that already exist: for example, it uses the Labs' experimental HTML 5-based inline JavaScript code editor Bespin as its programming front end, and even as a command line that gives immediate orders to Jetpack. And it uses the already popular Firebug as its inline debugger.
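
    For the curious, the notifier's principal function is easy to approximate outside of Jetpack. A minimal sketch in Python, assuming the Gmail Atom feed's URL and its <fullcount> element behave as they did at the time (neither is a stable, documented API):

        # Fetch Gmail's Atom feed over HTTP Basic auth and read the
        # unread-message count from its <fullcount> element.
        import urllib.request
        import xml.etree.ElementTree as ET

        GMAIL_FEED = "https://mail.google.com/mail/feed/atom"

        def unread_count(user, password):
            mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
            mgr.add_password(None, GMAIL_FEED, user, password)
            handler = urllib.request.HTTPBasicAuthHandler(mgr)
            opener = urllib.request.build_opener(handler)
            with opener.open(GMAIL_FEED) as response:
                feed = ET.parse(response)
            ns = "{http://purl.org/atom/ns#}"   # Atom 0.3 namespace, as the feed used
            return int(feed.findtext(ns + "fullcount"))

        # print(unread_count("you@gmail.com", "your-password"))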

    Last night's Call for Participation instructs interested parties as to how to download and install the Jetpack plug-in (not a big deal), how to submit bug reports, and where to find the Labs' very brief instructional videos and tutorials -- which may themselves have been produced in an hour or less.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/21/One_more_time___Dublin___.NET_Services__and_the_.NET_4.0_beta_today'

    One more time: 'Dublin,' .NET Services, and the .NET 4.0 beta today

    Published: May 21, 2009, 2:08am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Microsoft .NET Framework 4.0 Beta 1 from Fileforum now.

    For the fourth time since last September, a Microsoft spokesperson has contacted Betanews to suggest that our explanation of the remote application services deployment model brought closer by today's release of Beta 1 of .NET Framework 4.0 might confuse some folks. Thing is, we at least have reason to believe we understand the concept of it pretty well, having first spent up-front time with it last October at PDC.

    So what exactly is going on? The big idea is that Microsoft is making it possible to design .NET applications that utilize Windows Communication Foundation (WCF, its service-oriented model introduced with Vista) and Windows Workflow Foundation (WF -- just one "W," so as to avoid enraging the World Wildlife Fund), and that can be distributed using an applications server. The ideal of service-oriented logic is that an online application defines its services well enough for its functions to be discoverable -- so that client apps can discern how to utilize or "consume" those services. This way an online service doesn't have to be tailored exclusively for particular clients.

    Meanwhile, WF is a way for applications to better define the way they will work by dividing their functions into discrete jobs. One reason for doing this (other than the fact that it lends to more logical design) is so that the runtime schedulers can determine which of these jobs may be scheduled simultaneously with one another, and which require others to be completed first. Both of these concepts, you'd think, would be crucial for deploying an application remotely through an app server. The truth is, we're only just now getting around to them with regard to Microsoft.
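
    The scheduling payoff is easy to sketch. Given a set of jobs and their dependencies, a runtime can group them into "waves" whose members may run concurrently. The jobs below are hypothetical, and WF expresses all of this with activities rather than Python dictionaries, but the logic is the same:

        # Group jobs into waves: every job in a wave has all of its
        # dependencies satisfied by earlier waves, so a scheduler may
        # run a wave's jobs concurrently.

        def schedule_waves(deps):
            remaining, done, waves = dict(deps), set(), []
            while remaining:
                wave = {job for job, needs in remaining.items() if needs <= done}
                if not wave:
                    raise ValueError("dependency cycle")
                waves.append(wave)
                done |= wave
                for job in wave:
                    del remaining[job]
            return waves

        order = {"validate": set(),
                 "price":    {"validate"},
                 "stock":    {"validate"},
                 "confirm":  {"price", "stock"}}

        print(schedule_waves(order))
        # [{'validate'}, {'price', 'stock'}, {'confirm'}]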

    Now, all technologies ever devised by Microsoft since it first got paid for that traffic light regulator have been conceived and implemented in components. Sometimes a component that pertains to a broader concept gets a funky code-name. In this case, "Dublin" was created to pertain to the company's Windows Applications Server project, which is a huge expansion of its app server capabilities to enable WCF and WF on a distributed scale.

    .NET Framework 4.0, whose Beta 1 was released to the general public today, contains the necessary expansions for programmers especially using the Visual Studio 2010 Beta 1, also released today, to code apps on the local level that utilize WCF and WF, and that can eventually be deployed remotely. VS 2010 adds the modeling tools a developer needs to get the concepts down, test them out, and deploy them once they're whittled down.

    You've heard us talk about .NET Services. That's Microsoft's implementation of .NET in the cloud -- specifically, its own cloud, where Windows Azure reigns. One big way to produce a distributed application is to devise it first for .NET 4.0, then to deploy it over Azure where it's managed by .NET Services. Obviously there's a difference between the two components regarding location; .NET Framework 4.0 is on your system, while .NET Services is on Microsoft's. It's generally regarded that Dublin is the underlying technology behind .NET Services. Meanwhile, Dublin technology will eventually be distributed with the Application Server role of Windows Server, hopefully with the WS2K8 R2 release.

    As Microsoft's spokesperson told us this afternoon, "One clarifying point is that while Dublin and .NET Framework technologies are related parts of Microsoft's application server investments, Dublin, .NET Services, and .NET 4 are three separate technologies. Of course, as you might expect, customers will often see value in using them together from a use-scenario perspective, but they are not part of the same beta and will not ship together." Kind of, in the way different layers of an onion are separate parts of a salad, especially once you've peeled it. But they're not really different vegetables. Indeed, you won't be seeing the app server extensions of Dublin with .NET Framework 4.0 Beta 1; and .NET Services, as you now know, is way out there. But they all fit together and rely upon one another, which is why you'll sometimes hear the "Dublin" umbrella extended to include technologies beyond its original purview.

    An excellent article from Microsoft itself on this subject is "WCF and WF Services in the .NET Framework 4.0 and 'Dublin'" by Aaron Skonnard.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/20/Intel_to_compete_head_on_against_Microsoft_in_netbook_OS'

    Intel to compete head-on against Microsoft in netbook OS

    Published: May 20, 2009, 5:53pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    You can't really use the term "Wintel" to refer to computers any more. That fact has never been made clearer than yesterday, when during an Intel conference call with select general press reporters, company officials announced two major moves in the burgeoning arena of very small computers -- netbooks. First, its single-chip platform for netbooks is ready for sampling -- chipset, graphics, and Atom CPU all on one die. Second, its next generation slim form-factor Moblin Linux 2.0 is entering beta.

    While netbook manufacturers currently -- and rather suddenly -- are relying on the venerable Windows XP for as much as 96% of pre-installations, by one analyst's estimate, Moblin's engineers are banking on the possibility that manufacturers are settling for XP because it's the most uniformly adaptable, low-profile system there is for portable media. That said, XP could be too general-purpose in nature for what a netbook wants to be, which is a portable communicative device that isn't a phone.

    For that reason, Moblin tries to look less like a computer operating system and more like a hybrid smartphone/Web browser front end. Rather than a desktop, its home screen is something its developers call the "m_zone," complete with underscore character (when I first heard it, I thought it was a football reference). Here, basic functionality is tucked away along a toolbar that drops down from the top and hides itself when not in use (not unlike Mozilla Fennec). A smartphone-like reminder of events and to-do list items inhabits the left pane, along with a panel for user-installed applications.

    Moblin Linux 2.0 'm-zone,' the system's desktop counterpart.

    But it's the center workspace where all the activity is going on. Perhaps borrowing an idea from Palm's upcoming new Pre, there are "tiles" here that represent goings-on in the user's online social and Web spheres as they happen. If you look closely, you'll notice that the leftmost two columns of this area are devoted to Web site related activities and media posts, while the rightmost two columns contain tiles dealing with social network events (like incoming Twitter feeds) and dedicated Web apps like Last.fm.

    Though Moblin is officially a product of the Linux Foundation, its principal source of funding and support is Intel. That might partly explain why Moblin is so hard to test: To do it right, you need a netbook that you're not doing anything else with at the moment. And it needs to be a netbook built on Intel's Atom CPU and graphics chipsets -- anything from AMD, Qualcomm, or Nvidia will not do. The Moblin 2.0 beta is available for download as a "live image," although you may opt instead to use the Foundation's Image Creator tool to build your own live image with your choice of optional features. That tool requires Fedora 9 or later, OpenSUSE 10.3, or Ubuntu 8.10.

    The news of Moblin 2.0 beta going live comes as Intel reveals the first details of its "Pine Trail" chipset, whose select sampling has likely already begun in advance of next month's Computex show in Taiwan. That chipset will integrate memory controller and graphics functionality on the same die as the Atom CPU, for a complete system-on-a-chip that will automatically be best suited for running a Moblin-based handset or netbook. The very concept of integrating memory control with the CPU is among the technologies AMD will likely be contesting as its renewed battle over cross-licensing with Intel begins its next round.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/20/Top_10_Windows_7_Features__4__A_worthwhile_Windows_Explorer'

    Top 10 Windows 7 Features #4: A worthwhile Windows Explorer

    Published: May 20, 2009, 12:08am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    Over the last few decades of Windows' existence, Microsoft has wrestled with the problem of how much control it should give users over the arrangement and organization of files on their computers. In a perfect world, users shouldn't have to care about their \Windows\System32 or \Windows\SysWOW64 directories, so a good file manager shouldn't make the mistake of exposing users to information they don't know how to deal with. On the other hand, knowledgeable users will need to have access to system directories in such a way that they don't have to jump through hoops to find them.

    It is a balancing act, but not an impossible one. Over the years, third-party file management utilities such as Total Commander and xPlorer2 have been among the most popular software downloaded through Betanews Fileforum. Granted, these are typically installed and used by folks who know such bits of trivia as the fact that the \Local Settings\Application Data\Microsoft\Office folder in Windows XP maps to the \AppData\Local\Microsoft\Office folder in Windows Vista. But the reason they're popular with folks such as myself is because we need more direct and comprehensive access to the systems we manage. What's more, we commonly need access to two directories at once, and it makes more visual sense to have them both open.

    Almost dual-pane

    While the updated Windows Explorer in Windows 7 is not in itself a dual-pane file manager, the surprise is that it does not need to be. With the company's designers having implemented a snap-to feature called Aero Snap -- born out of the company's more intensive experiments with multitouch -- two open Explorer windows can very rapidly become as functional as a side-by-side, dual-pane file manager.

    Maybe not instantaneous, but relatively short-order dual-pane file-copy action in the new Windows Explorer for Windows 7.

    You open the first pane the usual way -- for instance, by selecting Computer from the Start Menu. Then drag that new window by its title bar over to the left side. (By the way, contrary to what we've been told and what we've read, the behavior we're actually seeing in Win7 is that you cannot drag a window by its title bar past the screen border.) By default now, there is a folder tree along the left side of each pane; you don't have to pull it up manually as you did with XP, and you don't have to pull up folders from a hidden frame as you sometimes did with Vista. So you open up a second Explorer window by right-clicking on the destination directory in the folder list, and from the popup, selecting Open in new window. Drag that to the right and release, and the second pane semi-maximizes to fill the space.

    Libraries

    For most production environments, business documents have a variety of home locations. Though you might think it's nice to have a "My Documents" directory, a single locale for wrapping all manners of media together in one tidy folder, when you manage any kind of enterprise whose business is the production of content, in practice, it becomes tenuous at best, untenable at worst. What's more, the business has its own documents, you have your own personal documents…and then you have your very personal documents. Relying on the Windows file system to keep those locales appropriately segregated and yet associated with one another, is not a productive use of one's time.

    The appearance of a single place for your stuff, as part of the Libraries feature of the new Windows Explorer in Windows 7.

    Windows 7 (and Windows Server 2008 R2) addresses this dilemma in a new and, at least from my point of view, hopeful way, through the use of libraries. We introduced you to libraries in our article on Homegroup networking. Essentially, a library is an aggregate collection of all the subfolders belonging to a set of one or more member folders. So you can collect documents (not videos, not pictures, not CD images, but the stuff you can print) together from multiple locations throughout your network or your Homegroup, in a centralized repository called Documents. That way, you can go ahead and keep those documents stored in multiple locations as necessary; the library collects their subfolders together into a single view without coalescing them into a single folder or locale.
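
    Conceptually, a library is just a merged, non-destructive view over several member folders. The member paths below are hypothetical, and Windows 7 actually persists libraries as .library-ms definition files rather than anything like this; the sketch only shows the aggregation idea:

        # One merged listing over several member folders, nothing moved.
        from pathlib import Path

        def library_view(members):
            """List every file under every member folder, as one collection."""
            files = []
            for folder in members:
                files.extend(p for p in Path(folder).rglob("*") if p.is_file())
            return sorted(files, key=lambda p: p.name.lower())

        documents = library_view([r"C:\Users\scott\Documents",
                                  r"\\server\shared\reports"])
        for f in documents[:10]:
            print(f)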

    While this changes things for Explorer, it can make things confusing for other programs, especially with regard to saving files. If you'll recall, Windows Explorer also provides the Open, Save, and Save As dialog boxes for applications such as Word and Excel. There are two ways of viewing libraries with respect to other folders, and their differences are esoteric but important: By default, Libraries are stationed along the left pane in a batch by themselves (in these screenshots, you'll see a library I've created called "Mixed Media," which is not one of Windows 7's default libraries). There, Documents looks like a subfolder marked with a piece-of-paper icon, but it isn't one -- it's just a member library. Now, the alternate view mode is accessible from the Tools menu (when you have the menu bar showing); from the Folder Options dialog box, under Navigation pane, it's called Show all folders. While that expands the number of folders you see in the entire list, it also scoots the Libraries group to make it look like a subfolder of Desktop. That doesn't quite make sense to me yet, but maybe I'll adapt.

    Anyway, when you "Save As" in Office 2007, by default you're still pointed to your personal documents directory, which in Windows 7 is once again called My Documents (a name brought back from the XP days). This is because you need to be able to save files to explicit locations. When you open a file, however, Office 2007 SP2 knows to look at the Documents library first, which makes this aggregation both convenient and smart. Again, how you see the libraries listed in Explorer depends on whether you have Show all folders checked or unchecked.

    The existence of libraries in Windows 7 creates another benefit for keeping one's personal documents and media on separate drives or network locations from your operating system: Should you do a clean and fresh install of Win7, as many will choose to do for perfectly good reasons (we've had some talks about this in the Comments section in recent days), it only takes a few moments to re-enroll your media and other files in Win7 libraries, rather than having to restore them from backups.

    Next: Catching up with your cell phone's view of media…

    Content View in Windows 7 Explorer automatically places album art alongside your MP3 files.

    Content View If you're a Windows user, there's a good chance your cell phone has a better and more interesting way of organizing your media files than does your current version of Windows Explorer. Windows 7 addresses this little discrepancy with the addition of a new option called Content View.

    For your MP3 file collection, Windows 7 automatically (and this will be a controversial feature for some, I know it) searches the Web for album art associated with each file. It will then store album art thumbnails as hidden JPEG files alongside your MP3s in their native directories. That's a lot of album art, quite frankly; and we were surprised to find that Microsoft appears to have gone to great lengths to find even some very obscure album covers, including these from some old movie soundtrack albums from the 33 RPM era.
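    Incidentally, you can spot those hidden artwork files yourself. Here's a tiny illustrative sketch -- the folder path is a placeholder of my own choosing -- that lists any JPEGs carrying the hidden attribute in a given music directory:

    // Illustrative: list hidden .jpg files (such as cached album art) in a
    // music folder. The path below is hypothetical.
    #include <windows.h>
    #include <stdio.h>

    int wmain()
    {
        WIN32_FIND_DATAW fd;
        HANDLE hFind = FindFirstFileW(L"C:\\Users\\Public\\Music\\Album\\*.jpg", &fd);
        if (hFind == INVALID_HANDLE_VALUE)
            return 1;

        do
        {
            // Album-art caches are written with the hidden attribute set.
            if (fd.dwFileAttributes & FILE_ATTRIBUTE_HIDDEN)
                wprintf(L"hidden artwork: %s\n", fd.cFileName);
        } while (FindNextFileW(hFind, &fd));

        FindClose(hFind);
        return 0;
    }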

    Seeing media files this way, however, makes more sense than Tiles view when you're hunting down something, especially a piece of music. A lot of us are more visual than verbal when remembering music; when you think of "The Sounds of Silence," for instance, your mind sees two fellows in black turtleneck sweaters in black-and-white. You can't input that as criteria into Windows Search; yet it might escape you that the name of the album you'd be looking for is "Bookends."

    Simplified sharing In an optimum home networking situation, you would want to avoid having to organize your media files in folders based on what you want to share and what you don't. Almost like setting up a firewall for a business, you'd rather exclude items from being shared with other family members by default, and then include exceptions to that rule at will. But Windows has never made sharing files easy for everyday folks; right-clicking on files, going to the Security tab, and sorting out security groups and their permissions is the sort of thing dads don't want to be doing when they've finally located and downloaded the music files needed for their daughters' recitals next week.

    The addition of Homegroup networking has brought with it a one-click Share with action in Windows 7 Explorer. Its best use is with homegroups (where more than one computer runs Win7), in which case you can take a non-shared directory, choose the files you do want to share, select Share with -> Homegroup, and find them enrolled in everyone's libraries.

    In cases where you don't have a homegroup going just yet, it's still simpler: The same logic that Vista used to make sharing printers simpler on a home network is applied to finding users to share files with. When you choose files and select Share with -> Specific people, you'll get a dialog box that brings up the names of users (security principals) who share this computer or who have been located in the network's workgroup. Now, right at first, there will probably be problems within networks shared by Win7 and Vista computers, where the latter group hasn't upgraded yet. But we do at least see the possibility of this being rectified in the field over time.

    For most everyday users, Windows Explorer is their home base of computer operations -- their administrative console. With Vista, we saw some glimpses of hope as it appeared the company was addressing the topic of how real-world users would expect this program to work. But maybe for the first time ever, we're seeing in Windows 7 Explorer some evidence that Microsoft engineers actually looked to other programs for inspiration.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.




    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/19/Linux_Foundation_joins_Microsoft_in_opposing_software_defect_warranties'

    Linux Foundation joins Microsoft in opposing software defect warranties

    Published: May 19, 2009, 5:15pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If someone sells you a defective piece of software, what rights do you have? If the retailer doesn't offer a return policy, as you may very well know -- especially if you ever read the End-User License Agreement, wherever it might be located -- your ability to hold the manufacturer liable may be very limited, if not non-existent. Since the 1990s, Microsoft has been an active opponent of changes to laws and regulations that allow the sale of software to be treated as an exchange of services rather than a sale of goods -- changes that one software development lawyer in 1997 warned would "have a far more damaging effect on software publishing competition and on the quality of software products than anything being done solely by Microsoft today."

    But now, Microsoft's principal competition in the operating system field has sided with it in opposing the latest efforts by a panel of prominent judges and attorneys to reform the protocols for developing software sales contracts and warranties. The Linux Foundation is now on record as opposing changes to warranties, and has co-authored a document with Microsoft to that effect, as Microsoft revealed last Sunday.

    Recently, legal scholars, legal authorities, and even governments have lent their talents and resources to the task of strengthening consumers' rights of redress with regard to defective software. Worldwide, the category of software has historically been given specific exclusion from having to provide warranties of merchantability -- guarantees to the customer that it works as advertised, that it won't harm systems on which it's installed, and that it doesn't contain bugs.

    In the United States, the Uniform Commercial Code (UCC) governs the way commercial entities devise their warranties of merchantability. But software as a category has been excluded from falling under the purview of the UCC, on account of successful arguments over the years from Microsoft and many others that software is essentially a service, even though it sometimes comes shrink-wrapped. The American Law Institute is the panel currently responsible for revising and updating the UCC.

    In a joint letter sent by attorneys for the Linux Foundation and Microsoft to the ALI last Thursday, the organizations argued that the current standard already gives the consumer plenty of rights with regard to defective software, and that the exemption granting software the status of a service should not be lifted for that reason.

    "Parties should be able to choose the rules that best suit their needs, as they have the most knowledge about their particular transaction," the organizations argue. "That is not to say that certain protections -- for example, in the business-to-consumer context -- are not warranted. But even in today's common law approach to software contracts, there is no great failure in terms of substandard quality or unmet expectations that would justify imposition of new mandatory rules, particularly given existing remedies under misrepresentation and consumer protection law."

    The organizations go on to state their opposition to the ALI's current re-drafting of the Principles of the Law of Software Contracts, "which establishes an implied warranty of no material hidden defects that is non-disclaimable." Having spent years enjoying software's exclusion as a service, Microsoft and the Linux Foundation now argue that it would be unfair to use that status to subject software to its own specific warranty guidelines. "No similar warranty appears in the [UCC]," they continue, "and no explanation is given in the commentary for treating software contracts differently from sales of goods on this point."

    Though year after year, action on redrafting the Principles has been delayed for further reflection (last year at this same time being no exception), the organizations called upon the ALI to delay the redraft again, this time to give "interested parties" an opportunity to comment.

    Next: How Linux's new stance could impact legal efforts in Europe...

    The Linux Foundation's siding with Microsoft on the issue of software warranties now puts it at odds with lawmakers in the European Union. Two weeks ago, EU Commissioners Meglena Kuneva and Viviane Reding jointly proposed what they called a "Digital Agenda for Consumer Rights Tomorrow" -- an eight-point plan for reform of consumer redress rights. Point #4 on that agenda was a continent-wide lifting of software's exemption from consumer warranties, explained in the agenda statement as: "Extending the principles of consumer protection rules to cover licensing agreements of products like software downloaded for virus protection, games, or other licensed content. Licensing should guarantee consumers the same basic rights as when they purchase a good: the right to get a product that works with fair commercial conditions."

    Lifting software's exemption in Europe could have the same effect as extending commercial warranty protections to software in America -- effectively preventing manufacturers from being able to "flexibly" disclaim their warranties of merchantability. Such warranties, the Linux Foundation argued in its joint letter, would be incompatible with the provisions of open source licensing.

    When news of the Agenda was first issued from Comm. Kuneva's office, the press took it to mean that games -- explicitly mentioned in the Agenda -- would have to be not only bug-free but perhaps genuinely good, or else consumers could hold developers liable. That's a bit of an exaggeration, although Kuneva has been on record as supporting consumers' rights to take action when software they purchase does not work as advertised.

    In a statement over the weekend posted to his company's legal blog, Microsoft Deputy General Counsel Horatio Gutierrez -- co-author of the letter to the ALI -- sounded a note of hope that his company's cooperation with Linux on this matter could be a sign of future partnerships to come. "Our industry is diverse and sometimes contentious, but if nothing else unites us it is that we all believe in the power of software," Gutierrez wrote. "I hope that this represents just one of many opportunities to collaborate with the Linux Foundation and others going forward. We have a lot more we can do together."

    And in his own blog post this morning, Linux Foundation Executive Director Jim Zemlin wrote, "The principles outlined by the ALI interfere with the natural operation of open source licenses and commercial licenses as well by creating implied warranties that could result in a tremendous amount of unnecessary litigation, which would undermine the sharing of technology...Today we are finding common ground with Microsoft and we look forward to potential collaboration in the future as well as to competing in the market and keeping each other honest."

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/19/Imagine__a__Firefox_4__without_browser_tabs'

    Imagine, a 'Firefox 4' without browser tabs

    Published: May 19, 2009, 1:07am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Insofar as Web applications have become a fact of many everyday users' lives and work, the Web browser has come to fulfill the role of a de facto operating system -- which is why browser performance is a more important topic now than ever before. Now, this most important class of application could be at a turning point in its evolution, a point where history appears to repeat itself once again.

    During the era between Windows 2.0 and 3.1, a minimized window was an icon that resided in the area we now consider the "Desktop;" and even today, many Windows users' Desktops don't perform the same role as the Mac Desktop that catalyzed Windows' creation. Even Windows 7 has tweaked the concept of what a minimized window does and means; and in the Web browser context, a tab represents a similar type of functionality, giving users access to pages that aren't currently displayed.

    Now that users who do business on the Web can open dozens of tabs at once, often among multiple separate windows, rows and rows of tabs are becoming less and less manageable. And while "user experience" designers such as Mozilla Labs' Aza Raskin have been hard at work endowing Firefox tabs with more functionality, as we're seeing in the latest betas of version 3.5, Raskin and many of his colleagues are now very openly pondering the question of whether they're as functional today as it seemed they would be back in 2000.

    "Much of our time on the Web is now spent in Web apps. We use them in long-lived session, and when we close the tabs that house them we know we'll be coming back," Raskin blogged last month. His comments accompanied a sketch of a possible future permutation of Firefox where open tabs are grouped according to category, and stacked along the left side of the browser rather than the top.

    "In a world where we have more tabs than fingers and toes, we need a better way of keeping track of them then just a horizontal strip," he continued. "Group-by-domain seems like a reasonable way to make scanning to find a tab easy. Are there other, better groupings?"

    A few responses came by way of comments, but they weren't exactly complete concepts. To spur some serious development in that direction, Mozilla Labs has announced it's making the redevelopment of browser tabs the subject of this year's Summer Design Challenge.

    Participants are being asked to build their ideas on any medium they have available to them, including the backs of napkins; but to submit those ideas, they need to upload a video to YouTube, Flickr, Vimeo, or other public video site, tagged with the term mozconcept. They then send links to the video to conceptseries@mozilla.com. Deadline for submission is June 21 -- only five weeks away -- and winning entries will be announced on July 8. There's no obligation on Mozilla's part to use any winning ideas at all, though the organization also doesn't appear to stake any exclusive claim to any idea once it's been submitted.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/18/Upgrading_from_XP_to_Windows_7__Does_Microsoft_s_method_work_'

    Upgrading from XP to Windows 7: Does Microsoft's method work?

    Published: May 18, 2009, 10:57pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Your initial greeting when starting to run Windows Easy Transfer in Windows XP.

    Three months ago, Betanews experimented with a process for converting a Windows XP-based system to Windows 7 even though a direct upgrade process was not officially supported by Microsoft. Our process involved borrowing a Windows Vista installation disc, and going through the upgrade motions twice except for the part where you register and activate Vista. This way, you would only have to register Windows 7. Although our tests involved an earlier build of Win7 than the current public release candidate, we discovered the process, while slow and laborious, was at least workable.

    To make certain of this, we installed Office 2007 in our XP-based test system first, then ran Word, Excel, and PowerPoint perfectly well in Windows 7 after the installation was complete. We did have to re-activate Office, but that only took a moment.

    But wait a minute, Microsoft told us, there is a way to migrate from XP to Win7. Really? Well, in a sense. It involves the latest version of what in prior editions was called the Files and Settings Transfer Wizard, and which in Win7 goes by the name Windows Easy Transfer. As the documentation on Microsoft TechNet explains, "To maintain settings when installing Windows 7 on a computer running Windows XP, you must migrate files and settings using a tool such as Windows 7 Easy Transfer and then reinstall your software programs."

    So if you have to reinstall all your old software anyway -- which, technically speaking, creates most of the "settings" this new Easy Transfer would be migrating -- just what does this wizard actually do? Does it just back up your private documents, or your My Documents folder? And if that's all it does, then wouldn't depositing that folder on a separate hard drive make the migration process easier to manage?

    We decided to try to answer these questions for ourselves rather than cast more rhetorical questions to the wind. What we discovered wasn't exactly hopeful news:

    Your XP installation evidently needs to be squeaky clean. Rather than create a very clean XP installation, we used a duplicate of a very, very well-used XP installation, with Registry settings dating back to practically the Middle Ages. We could not get Easy Transfer to survive the initial scan of the Documents and Settings folder, after numerous attempts, despite our having disabled all anti-malware software, including firewalls. In an attempt to clear the system of any possible conflicts with system drivers, we tried to run Easy Transfer in Safe Mode, only to discover...

    Things look good for a moment with Windows Easy Transfer, just before the whole affair breaks down.

    You can't run Easy Transfer in Safe Mode. You'd think since this is essentially a system tool, Microsoft would have designed it to run in Safe Mode -- which is where many Windows 95 / Win98 veterans performed their system upgrades to XP, after all. Perhaps our choice of USB-based storage device was giving the wizard fits -- surely with 100 GB of free space, it shouldn't be a problem. In any event, we tried choosing to network two computers together, bypassing the remote storage option. And that's where we learned...

    If you want to network a second computer, its Windows must have the same bit-width as the one you're transferring. The setup program assumes that any network computer where you want to store the files and settings is the same one where you want them finally installed (we discovered this in the Help file which, granted, is where folks more sane than we are tend to look first). So if you're transferring settings from a 32-bit computer, no, you can't use another computer on your network as a temporary storage device, so a 64-bit Vista is right out.

    We got the distinct impression that the only XP system Easy Transfer will work with is one that has only a single hard drive and whose My Documents directory is stored on C:\. As I've advised folks for decades, this isn't the way you should set up your computers anyway -- your personal documents should always be on a separate disk from your system disk, for both performance and safety reasons.

    So for now, we'll continue to advise our readers and friends to use the method we prescribed: Borrow a Vista installation disc, upgrade to Vista, then upgrade again to Win7. It's not pretty, nor is it particularly fun, but it's manageable.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/18/Visual_Studio_2010_Beta_1__.NET_4_Beta_1_for_general_release_Wednesday'

    Visual Studio 2010 Beta 1, .NET 4 Beta 1 for general release Wednesday

    Published: May 18, 2009, 5:25pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Breaking News

    A Microsoft spokesperson has confirmed to Betanews that today, May 18, will be the release date for Visual Studio 2010 Beta 1 as well as .NET Framework 4.0 Beta 1, for MSDN subscribers. The general public will get their first shot at both new technologies on Wednesday.

    Though last September's preview edition showed the addition of new tools for application architecture modeling -- moving deep into IBM territory there -- as well as for development team management, it was all being shown under the auspices of the old VS 2008 front end. Soon after the preview edition was released, the company revealed that it was scrapping that more traditional front end in favor of a design based on the Windows Presentation Foundation platform.

    That design made its way to select testers first, who, we've learned, did not like one of the new design changes: the use of different pointing triangles to denote collapsed and expanded code section blocks. As Visual Studio General Manager Jason Zander blogged last week, the feedback essentially boiled down to: Why change a good thing? So the [+] and [-] boxes from VS 2008 have returned in Beta 1.

    The new IDE will also give developers their first chance to build .NET services in the cloud, a huge new addition to that platform. In fact, the addition is itself a platform, up to now code-named "Dublin," and will give developers working in local .NET code a way to leverage as many cloud-based services as they require, including deploying their entire applications to cloud-based services such as Windows Azure.

    BETA CAPSULE

    Dublin

    What It Is
    The Windows Application Server extension project is Microsoft's platform for distributing .NET applications in the cloud.

    How It Works
    Microsoft's objective is to leverage its existing investment in the .NET Framework so that businesses can readily deploy applications, using the tools and resources they already own (including Visual Studio 2010), on a cloud computing platform such as Windows Azure.

    Dublin architecture asks developers to build "event handlers," borrowing a phraseology from another era of Windows programming, except that these events are generated by Web users, not by the end user of a GUI. These events are then handled through "virtual ports" that capture and interpret the events asynchronously, and then respond. While conceivably Dublin could deploy an existing .NET application to the cloud, that would miss the point. Truly distributed applications respond to events that have been "published," and to which customers "subscribe" -- a signal which the application can recognize and accept. Using tools such as Workflow Foundation (WF), developers can build .NET code that responds to published events through what's called a service bus in Windows Communication Foundation. (This is the technology which Microsoft engineers predicted in 2004 would have rendered IIS obsolete by now.) The result is an asynchronously behaving component that can be deployed as part of a distributed composite application.

    What It Means
    It is classic Microsoft to leverage its strengths in one area to build in another. It absolutely differentiates Microsoft's approach to cloud computing from its competitors in that it enables customers to build their own services to be deployed in the cloud, rather than 1) float an image of Windows Server in the cloud and pretend it's on-site; or 2) try to adapt someone else's cloud-based "out-of-the-cloud" application to suit their own purposes explicitly.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/18/AT_T_re_enters_the_data_services_field_by_way_of_the_cloud'

    AT&T re-enters the data services field by way of the cloud

    Published: May 18, 2009, 4:53pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It was during the 1960s that engineers first envisioned a realistic concept for remote storage of electronic data. It would be stored and retrieved using a radically redefined telephone network, one which folks might have to wait until 1980 or so to finally witness. And since it required the telephone, the master of the new concept seemed inevitably to be the Bell System -- AT&T.

    The reason it didn't happen that way (the breakup of AT&T aside) was because local storage ended up being relatively cheap, and hard drives made sense. But four decades later, in a vastly different global economy, businesses' appetite for storage space is exceeding the ability of even cheap technologies like hard drives to keep providing it. So businesses are once again investigating a telecommunications-based option, and it is amid that backdrop of historical irony that AT&T is re-entering the picture. This morning, the company announced a programmed, systematic entry into the cloud-based data storage market, choosing a few customers at a time for a new on-demand storage service model it's calling Synaptic Storage as a Service.

    The value proposition is this: Businesses preparing to invest in massive data storage infrastructures for such functions as historical backups of financial transactions, medical records management (a huge new legal requirement for hospitals), and handling duplicates of company webcasts, could be spending millions up front for capacity they may end up not needing. What's more, the service lifetime of that capacity may expire and need replacing long before the company has fully amortized that investment. AT&T's service will cater specifically to businesses that need high capacity, while trimming their costs in accordance with only the capacity they consume.

    It's by no means a new field: Amazon is the trailblazer in cloud services, and IBM has been steering its Tivoli customers in the direction of the cloud since 2007. Meanwhile, startups like Zetta Technologies -- founded by a handful of Netscape veterans -- are already doing a splendid job of making the case for applying the public utility service model to mass data storage.

    A recent Zetta white paper (PDF available here) makes its case more succinctly than even AT&T: "Large-scale managed NAS arrays...are very complex and expensive to operate, and this complexity increases exponentially as the scale of the storage increases. Good storage administrators are a scarce, expensive resource, and storage consumes extensive, costly space, cooling and power in the data center. Storage vendors also play too large a role in dictating both purchase quantity and purchase interval, based on confusing hardware/software bundling and configuration requirements, which may or may not align to your needs. And vendor lock-in is a real concern -- moving to another storage vendor can involve an expensive forklift upgrade in technology and significant training on a whole new set of processes and tools."

    The ace that AT&T expects to play in order to trump IBM, Amazon, and Zetta in this game comes from EMC, recognized everywhere as the leading network storage provider, though its market share is said to be on a bunny-slope slide toward 25%. Still, EMC has been working on a technology called Atmos -- a networked storage infrastructure on such a massive scale that the only real way to provide it to customers, as EMC itself has said, would be through some well-known third party.

    "One way end-users will utilize cloud computing is to access their applications and information from a third-party provider -- like a large telecommunications company -- that has built a global cloud infrastructure," states a recent EMC online brochure entitled "Cloud Optimized Storage." "That cloud infrastructure will make massive amounts of unstructured information available on the Web, and will require policy to efficiently disperse the information worldwide."

    What EMC means by "policy" is a way for systems on the customer end to utilize rules determining when systems use local storage, locally accessible network storage, and/or cloud storage. As an EMC white paper explains (PDF available here), "EMC Atmos improves operational efficiency by automatically distributing information based on business policy. The user-defined policies dictate how, when, and where the information resides."

    These policies also determine by what means this data may be accessible. Though Atmos provides "legacy" file storage architectures such as CIFS (some may be surprised to consider that "legacy"), it also is capable of mandating that certain storage only be made accessible through Web applications, using protocols such as SOAP and REST. This may require a great deal of customer education as to how policy works in this context and how it should be managed, which may be why AT&T is rolling out its SSS service in what it's calling a "controlled" manner. While the rollout begins this month, the company has not yet revealed any interim milestones for wider availability, and is also not revealing any plans for offering smaller-scale services ("SSS sss?") to the general public.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/15/Top_10_Windows_7_Features__5__Multitouch'

    Top 10 Windows 7 Features #5: Multitouch

    Published: May 15, 2009, 5:27pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    For close to two decades now, the design of applications has changed surprisingly little. At their core, apps wait for users to generate input, and they respond -- a server/client model of processing on a very local scale. So in a very real way, what applications do has been a function of how they respond -- the whole graphical environment thingie you've read about has really been a sophisticated way to break down signals the user gives into tokens the application can readily process.

    The big roadblock that has suspended the evolution of applications from where they are now, to systems that can respond to such things as voice and language -- sophisticated processes that analyze input before responding to it -- is the token-oriented nature of their current fundamental design. At the core of most typical Windows applications, you'll find a kind of switchboard that's constantly looking for the kinds of simple input signals that it already recognizes -- clicking on this button, pulling down this menu command, clicking on the Exit box -- and forwarding the token for that signal to the appropriate routine or method. Grafting natural-language input onto these typical Windows apps would require a very sophisticated parser whose products would be nothing more than substitutes for the mouse, and probably not very sufficient substitutes at that.

    If we're ever to move toward an analytical model of user input, we have to introduce some sophisticated processes in-between -- we have to start endowing apps with the capability to ask the question, "What does the user mean?" And while this edition of Top 10 isn't really about natural language processing at all, it is about Microsoft's first genuine steps toward evolving applications in the direction they need to go, if they're ever to absorb natural language as the next big step.

    That first step, perhaps unexpectedly, is multitouch. It's a somewhat simpler version of solving the bigger problem of ascertaining meaning through input, because now that Windows apps will begin being able to process input coming from two or more points on the screen at once, the relationships between those multiple points will need to be analyzed. For example, when a program receives a series of signals that appear to show multiple points, arranged in a generally vertical pattern, moving slowly and then very quickly to the right...could that mean the user wants to "wipe the slate clean?" "Start over?" "Clear screen?" What's the probability of this?

    Analyzing input changes everything, and if enough developers invest their true talents in addressing this necessary element of evolution, this could change everything for Windows apps -- everything.

    Where we are now on the evolutionary scale is not too far ahead of where we started roundabout 1982. The creation of Common User Access (the use of graphical menus and dialog boxes that follow a standard format) led to the development of a kind of "switchboard" model for processing input. And if you follow the model of Windows programming espoused ever since the days of Charles Petzold's first edition of Programming Windows, that switchboard is the part you build first -- if you're a developer, everything you make your program do follows from what you put in its menu.

    Veteran Windows programmers going back to straight C know this switchboard as the WndProc() procedure; and although languages that have crept into IDEs since the Windows/386 days do use different models and conventions, for the most part, they just provide developers with shortcuts to building this basic procedure. Literally, this procedure looks for the unique ID numbers of recognized input signals, called "window messages" or "mouse events," denoted in the nomenclature by the prefix WM_. A long switch clause goes down a list, checking the most recently received event ID against each possible response, one at a time, and connecting with the procedure associated with that response once there's a match. It's like looking for a plug on a telephone switchboard left-to-right, top-to-bottom, every time there's an incoming call.
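    For readers who have never seen one, here is a bare-bones sketch of that switchboard in the straight-C Win32 style the paragraph describes (window-class registration and the message loop are omitted for brevity):

    // A minimal WndProc "switchboard": check each incoming window message ID
    // against the responses the program recognizes, one case at a time.
    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_LBUTTONDOWN:   // mouse event: left button pressed
            MessageBeep(MB_OK);
            return 0;

        case WM_COMMAND:       // a menu or control command arrived;
            // LOWORD(wParam) carries the command ID from the menu definition
            return 0;

        case WM_DESTROY:       // window is going away; end the message loop
            PostQuitMessage(0);
            return 0;
        }
        // Anything we don't recognize goes to the default handler.
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }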

    Throughout the history of Windows, the evolution of application architecture has typically taken place in either of two ways: Either Microsoft improves some underlying facet of the operating system, which leads to an improvement (or at the very least, an obvious change) in how the user perceives her work immediately; or Microsoft implements a change which developers have to seize upon later in order for users to see the benefits down the road. As you'll certainly recall, the change to the security of the system kernel in Windows Vista was immediate, and its benefits and detriments were felt directly. But when Microsoft began introducing Windows Presentation Foundation (WPF) during the twilight of Windows XP's lifecycle, it took time for developers to transition away from the Microsoft Foundation Classes (MFC), and many still choose not to.

    Multitouch is one of those changes that falls into the latter category. Since December when the company published this white paper, Microsoft has been calling its latest layer of user input Windows Touch. Windows 7 will have it, and presumably at this point, Vista can be retrofitted with it.

    Windows Touch is an expansion of WPF to incorporate the ability to do something the company's engineers call coalescing -- ascertaining whether multiple inputs have a collective meaning. Using two fingers to stretch or shrink a photograph is one example, or at least, it can be: In an application where two fingers may be used for a variety of purposes -- twirling, changing perspective, maybe even minimizing the workspace -- an act of coalescing would involve the application's ability to register the multiple inputs, and then ascertain what the user's intent must be based on the geometry that WPF has returned.

    The concept of coalescing was introduced to us last October by Reed Townsend, the company's lead multitouch engineer. As he told attendees at the PDC 2008 conference that month, the revised Win32 Application Programming Interface (which will probably still be called that long after the transition to 64-bit is complete) will contain a new "mouse event" called WM_GESTURE, and it will handle part of the job of coalescing motions, in order to return input messages to an application that go just beyond what the mouse pointer can do for itself. Rotation, zooming, and panning, for instance, may require a bit more decision making on the part of Windows Touch -- a kind of on-the-spot forensics that filters out panning motions from scrolling motions, for instance.
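    Going by the documented shape of the Windows 7 touch API, a WM_GESTURE handler slots right into that same old switchboard. Here's a minimal sketch with error handling trimmed; the gesture IDs are the SDK's, while the empty case bodies are mine:

    // Sketch of a WM_GESTURE handler, per the Windows 7 SDK (requires
    // _WIN32_WINNT >= 0x0601). Called from WndProc when msg == WM_GESTURE.
    #define _WIN32_WINNT 0x0601
    #include <windows.h>

    LRESULT OnGesture(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        GESTUREINFO gi = { sizeof(GESTUREINFO) };
        if (GetGestureInfo((HGESTUREINFO)lParam, &gi))
        {
            switch (gi.dwID)   // which coalesced gesture did Windows decide this was?
            {
            case GID_ZOOM:     // two fingers stretching or pinching
                // gi.ullArguments carries the distance between the touch points
                break;
            case GID_PAN:      // fingers dragging the content
                break;
            case GID_ROTATE:   // fingers twisting around a center point
                break;
            }
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;
        }
        return DefWindowProc(hwnd, WM_GESTURE, wParam, lParam);
    }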

    Microsoft multitouch program manager Reed Townsend demonstrates his company's expandable globe simulator, in a demo from PDC 2008.

    Since the first editions of the Tablet SDK were produced during XP's lifecycle, Microsoft has been building up a vocabulary of coalesced gestures that developers may be able to utilize through the assembly of gesture messages. Perhaps a down stroke followed by a right stroke may have some unique meaning to an app; and perhaps the length of that right stroke may impart a deeper meaning.

    Simply endowing programs with the capability to judge what the user means based on the input she appears to be giving, changes the architecture of those programs, particularly in Windows. Born from the company's experiments with Surface, a simple manipulable globe application has provoked Microsoft's engineers to think very differently about how input is processed.

    Video: Windows 7 Touch - Globe Application

    The idea, as we saw last October, is simple enough: Place a globe on the screen which the user can twist and turn, but then zoom in to see detail. The project became more detailed once the engineers began to appreciate the complexity of the problems they were tackling. Stretching and shrinking a photograph is a simple enough concept, particularly because a photo is a) two-dimensional, and b) rectangular. But a three-dimensional sphere adds new dimensions, as users are likely to want to use the device spherically. "Down" and "up" become curved, sweeping motions; and zooming in on an area becomes a very complex exercise in transformation. Add two-dimensional floating menus to the equation, and suddenly the application has to analyze whether a sweeping motion means, "Get that menu out of my way," "Pin that menu to that building," or "Make that menu smaller for me." And when a big 3D object such as the Seattle Space Needle comes into view, when the user touches its general vicinity, does he mean to be touching the building itself? Or the globe on which the building is positioned?

    In order to make input simpler -- to have a moving globe that folks will already know how to move and zoom into and look closely at and do things automatically with -- the application has to incorporate orders of magnitude more complexity than such programs have ever carried before. The moment the program "guesses wrong," and the globe behaves in a way users didn't expect, they'll feel disconnected from it, as though an object in their real world were to wink out of existence or morph into some foreign substance.

    Next: The immediate upside of multitouch...

    The immediate upside of multitouch What the engineers are learning from this process is already benefiting Windows 7 in general, in features you'll see and use every day. The concept is being called Aero Snap, but it doesn't actually require the Aero rendering model at all -- it works fine in a virtual machine without Aero. After 19 years of the current window model, Microsoft discovered something that could have helped as far back as the days of the "MS-DOS Executive:" Perhaps a simpler way to maximize a window would be to just drag its title bar to the top of the screen.

    Now, while that concept sounds simple enough, what if all the user's trying to do is drag an existing window up? If he hits the top of the screen, its size could blow up unexpectedly -- and once again, Windows feels like a foreign substance. The user experience (UX) engineers tackled that problem by creating a way to "feel" when a window dragged up there can be maximized. If the left button is held down while the pointer touches the edge of the screen and stays there, Windows 7 responds with a visual signal that highlights the entire workspace, advising the user that the window would be maximized if he let go of the button. (This effect does look cooler in an Aero environment, where the system applies a glossy finish; in normal rendering, the system merely covers the workspace in a blue halftone.) If the pointer continues beyond the upper edge, the window is merely dragged on up, just like before.

    The 'Aero Snap' feature at work in Windows 7, where the user makes a half-maximized window by dragging its title bar to the left or right edge.

    Taking this idea a few steps forward, the Win7 user can now drag a window to the left edge to have it "semi-maximized" -- to fill just the left half of the screen. The visual cue remains, so if the pointer moves to the edge and stops, he's given a clear warning with the visual effect. But then the user can do the same with the right edge as well, providing at long last, after two decades of complaints from me and Jerry Pournelle and the rest of the world, a way to create a dual-paned Explorer environment in about a second. And having learned lessons from the Surface and Touch projects, Win7's UX engineers remembered that doing the opposite motion should provide the opposite response; so once snapped windows are dragged away from their edges, they return to their previous size and shape.

    This is the beginning of something, a lesson finally learned after years of plodding in the same general direction with the same general result. The introduction of styluses and now fingertips into the input model has finally led to a scenario where apps may at last be able to process natural language syntax -- not just single words like "maximize," but sophisticated concepts like, "Show me what that might look like in five years." Simply adding the input analysis phase as a step in the process of computing will -- if we do this right -- truly revolutionize the way we work. Of course, if we follow the same patterns we've followed up to now, the whole idea could also begin and end with Snap.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.




    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/14/Apple_s_Safari_4_Beta_for_Windows_speeds_up_after_security_update'

    Apple's Safari 4 Beta for Windows speeds up after security update

    Published: May 14, 2009, 7:18pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Test Results

    Earlier this week, Apple posted security updates for both the production and experimental versions of its Safari browser, for both Mac and Windows platforms. But Betanews tests indicate that the company may have sneaked in a few performance improvements as well, as the experimental browser posted its best index score yet: better than 15 times the performance of Internet Explorer 7 on the same system.

    After some security updates to Windows Vista, Betanews performed a fresh round of browser performance tests on the latest production and experimental builds. That made our test virtual platform (see page 2 for some notes about our methodology) a little faster overall; and while many browsers appeared to benefit, including Firefox 3.5 Beta 4, the very latest Mozilla experimental browsers in the post-3.5 Beta 4 tracks clearly did not. For the first time, we're including the latest production build of Apple Safari 3 in our tests (version 3.2.3, also patched this week) as well as Opera 9.64. Safari 4, however, posted better times than even our test system's general acceleration would allow on its own.

    In our first tests of the Safari 4 public beta against the most recent edition of Google Chrome 2 last month, we noted Safari scored about 10% better than Chrome 2, that company's experimental build. Since that time, there have been a few shakedowns of Chrome 2, and a few security updates to Vista.

    Those Vista speed improvements of about 4% overall were reflected in our latest IE8 and Firefox 3.5 Beta 4 test scores. After applying the latest updates, we noticed IE8 performance improve immediately, by almost 13% over last month to 2.47 -- meaning, about 247% better performance than Internet Explorer 7 on the same platform. While Google Chrome posted better numbers this month over last, neither version may have benefitted from the Vista speed boost very much, with Chrome 1 jumping 2.4% over last month to 11.9, but Chrome 2 faring better, improving almost 6% to 13.84.

    Relative test scores of Windows-based Web browsers, conducted May 14, 2009.

    Safari 4's speed gains were closer to 8% over last month, with a record index score of 15.5 in our latest test. This while the latest developmental builds of Firefox 3.5's not-yet-public Beta 5 ("Shiretoko" track) and 3.6 Alpha 1 ("Minefield" track) were both noticeably slower than even 3.5 Beta 4. This was a head-scratcher, so we repeated the test four times, refreshing the circumstances on each run (that's why the report I'd planned for yesterday ended up being posted today), and our results were confirmed every time.

    We started fresh with Opera, this time testing both the production and preview builds for the first time. Opera 9.64 put in a score very comparable to Safari 3.2.3, at 5.94 versus 5.64, respectively. But our latest download of the Opera 10 preview kicked performance up more than a notch, with a nice 15.4% improvement over last month to 6.21.

    Next: A word on methodology...

    A word on methodology I've gotten a number of comments and concerns regarding the way our recent series of browser performance tests is conducted, many of which are very valid and even important. For the majority of these tests, I use a Virtual PC 2007 VM with Windows Vista Ultimate. Most notably, I've received questions regarding why I use virtual machines in timed tests, especially given their track record of variable performance in their own right.

    The key reason I began using VMs was so that I could maintain a kind of white-box environment for applications being tested with an operating system. In such an environment, there are no anti-malware or anti-virus or firewall apps to slow the system down or to place another variable on applications' performance. I can always keep a clean, almost blank environment as a backup should I ever install something that compromises the Registry or makes relative performance harder to judge.

    That said, even though using VMs gives me that convenience, there is a tradeoff: One has to make certain that any new tests are being conducted in a host environment that is as unimpeded and functional as for previous tests. With our last article on this subject, I received a good comment from a reader who administers VMs who said from personal experience that their performance is unreliable. With my rather simplified VM environment (nothing close to a virtual data center), I can report the following about my performance observations: If the Windows XP SP3 host hasn't been running other VMs or isn't experiencing difficulties or overloaded apps, then Virtual PC 2007 will typically run VMs with astoundingly even performance characteristics. The way I make sure of this is by running a test I've already conducted on a prior day again (for instance, with Firefox 3.5 Beta 4). If the results end up only changing the final index score by a few hundredths of a point, then I'm okay with going ahead with testing new browser builds.

    I should also point out that running one browser and even exiting it often leaves other browsers slower, in both virtual and physical environments. This is especially noticeable with Apple Safari 3 and 4; even after exiting it on any computer I've used, including real ones, Firefox and IE8 are both very noticeably slower, as is everything else in Windows. My tests show Firefox 3 and 3.5 Beta 4 JavaScript performance can be slowed down by around 300%. For that reason, after conducting a Safari test, the VM must be rebooted before trying any other browser.

    When VM performance does change on my system, it either changes drastically or little at all. If the index score on a retrial isn't off by a few hundredths of a point, then it will be off by as much as three points. It's never in-between. In that case, I shut down the VM, I reboot XP, and I start over with another retrial. Every time I've done this without exception (and we're getting well into the triple-digits now), the retrial goes back to that few-hundredths-of-a-point shift.

    Therefore I can faithfully say I stand behind the results I've reported here in Betanews, which after all are only about browsers' relative performance with respect to one another. If I were testing them on a faster system or on a bare-bones physical machine, I'd expect the relative index numbers to be the same. Think of it like geometry: No matter how much you scale a triangle up or down, its angles stay the same, they still add up to 180 degrees, and its sides keep the same proportions to one another.
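    To make the ratio idea concrete, here's a small reconstruction -- my own sketch, not the actual Betanews harness, with made-up timings -- of how such a relative index can be computed: each browser's time on a test is divided into the IE7 baseline time for that test, and the per-test speedups are averaged into one index. Because the index is built entirely from ratios, a uniformly faster or slower host machine should leave it roughly unchanged, which is the triangle point above.

    // Illustrative reconstruction of a relative performance index: divide the
    // IE7 baseline time by each browser's time per test (2.0 = twice IE7's
    // speed), then average the speedups. All timings below are invented.
    #include <cstdio>
    #include <vector>

    double relativeIndex(const std::vector<double>& ie7Times,
                         const std::vector<double>& browserTimes)
    {
        double sum = 0.0;
        for (size_t i = 0; i < ie7Times.size(); ++i)
            sum += ie7Times[i] / browserTimes[i];  // per-test speedup vs. IE7
        return sum / ie7Times.size();              // simple average of speedups
    }

    int main()
    {
        std::vector<double> ie7(4);     // IE7 baseline times, ms (hypothetical)
        ie7[0] = 4000; ie7[1] = 1200; ie7[2] = 900; ie7[3] = 2500;

        std::vector<double> safari(4);  // candidate browser times, ms (hypothetical)
        safari[0] = 260; safari[1] = 90; safari[2] = 70; safari[3] = 180;

        printf("index: %.2f\n", relativeIndex(ie7, safari));
        return 0;
    }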

    Now, all that having been said, for reasons of reader fidelity alone, there are benefits to be gained from testing on a physical level. You need to trust the numbers I'm giving you, and if I can do more to facilitate that, I should. For that reason, I will be moving our browser tests very soon to a new physical platform (I've already ordered the parts for it). At that point, I plan to restart the indexes from scratch with fresh numbers.

    Next, there's been another important question to address concerning one of the tests I chose for our suite of four: It's the HowToCreate.uk rendering benchmark, which also tests load times. On that particular benchmark, Safari and Chrome both put in amazing scores. But long-time Safari users have reported that there may be an unfair reason for that: Safari, they say, fires the onLoad JavaScript event at the wrong time. I've encountered such problems many times before, especially during the late 1980s and early '90s when testing BASIC compilers whose form redraw events had the very same issues.

    This time, the creators of the very test we chose called the issue into question, so we decided to take the matter seriously. HowToCreate.uk's engineers developed a little patch to their page which they said forces the onLoad event to fire sooner, at least more in accordance with other browsers. I applied that patch and noticed a big difference: While Safari 4 still appeared faster than its competition at loading pages, it wasn't ten times as fast, but more like three times. That gap is significant -- enough for me to adjust the test itself to reflect it. For fairness, I applied the adjusted test to all the other browsers, and noticed a slighter difference in Google Chrome 1 and 2, but a negligible difference in Firefox and IE. Our current round of index numbers reflects this adjustment.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/14/New_royalties_for_radio_clears_first_congressional_hurdle'

    New royalties for radio clear first congressional hurdle

    Published: May 14, 2009, 1:42am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If Congress were to pull the trigger, eliminating language from US Code dating back to the 1920s stating that terrestrial radio stations don't have to pay royalties to play music whose performers they promote, the resulting shock wave could impact the Internet music industry, and digital music publishers in general. With some radio broadcasters reducing or even eliminating their air time -- one such threatened repercussion -- Internet radio alternatives like Last.fm and Pandora could pick up more listeners. But if new performers' royalty rates were to result, with terrestrial radio serving as a gauge for what all broadcasters should pay, those Internet stations could end up paying more for absorbing those new listeners.

    That outcome is by no means certain, but one of the few likelihoods in the whole radio royalties debate came to fruition today, as the latest version of the Performance Rights Act passed the House Judiciary Committee by a vote of 21 - 9. That committee is chaired by John Conyers, Jr. (D - Mich.), who is the bill's principal sponsor, and whose realignment of judiciary subcommittees following last November's elections certified that his committee would be the one marking up the bill, and not a subcommittee chaired by Rep. Rick Boucher (D - Va.).

    The original language of the bill mainly consisted of striking the exemption language in Title 17 of the US Code. But in the bill's evolution, it's gained some interesting amendments including one that mandates that copyright holders -- the ones receiving royalties from radio already -- pay 1% of their receipts as royalties to a performers' rights fund managed by the American Federation of Musicians.

    So it's no surprise that the AFM came out in support of today's vote, with its president, Thomas F. Lee, stating today, "This legislation will close the loophole in the copyright law and end the free pass that terrestrial radio has enjoyed to play music without paying the royalties that all other music platforms -- including satellite, cable and Internet radio stations -- pay artists, musicians and rights holders for the use of their recordings."

    But there may be a silver cloud lurking behind that dark lining: As the National Association of Broadcasters reminded the press this afternoon, a resolution opposing any such striking of the exemption language has gained the signed support of 44% of the House. That doesn't spell smooth sailing ahead for the Conyers bill. Recently, observers of Congress noted that Conyers' language might stand a chance of passing if it were attached to some other legislation, as what's typically called a rider. That's the way, in 2006, Sen. Majority Leader Bill Frist (R - Tenn.) famously got his language banning monetary transactions related to online gambling signed into law by the President -- by tying it to a shipping ports anti-terrorism protection bill that Mr. Bush couldn't help but sign.

    With the AFM fund creation language hanging on Conyers' bill now, however, transferring it to a rider on some other pressing legislation becomes harder. As the bill later comes up on the House floor, where a terrific debate is now certain, it has to pass or fail on its own merits. If it passes, radio stations could find themselves owing a percentage of their revenue to performers' rights organizations for the first time, perhaps borrowing a formula newly applied to Internet radio stations. If the bill fails, then it's more likely that a decision reached last January by the Copyright Royalties Board to base Internet radio broadcasters' royalties on revenue and not numbers of performances, would continue to be recognized by Congress.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/13/Intel_CEO__The_exclusivity_and_loyalty_of_OEMs_are_up_for_bids'

    Intel CEO: The exclusivity and loyalty of OEMs are up for bids

    Published: May 13, 2009, 10:26pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    This morning's ruling by the European Commission, essentially finding Intel guilty of illegally tying rebates to exclusivity agreements, among other practices, is said to be a 500+ page document that as of now remains under seal. But the EC's characterization of its ruling today paints a picture of a dominant market manipulator that made exclusivity deals with at least five major global PC producers and Germany's largest PC retailer, making them offers they couldn't refuse that kept AMD from competing on a level playing field.

    "Not all rebates are a competition problem -- often they will lead to lower prices for consumers in the long term as well as the short," stated EC Commissioner for Competition Neelie Kroes in a press conference this morning in Brussels. "But the Intel rebates in this case were a problem because of the conditions that Intel attached to its rebates. Moreover, the Commission has examined closely whether an efficient competitor could have matched these rebates. These conditions, to buy less of AMD's products or to not buy them at all, prevented AMD from competing with Intel on the merits of its products. This removed the possibility of genuine choice for consumers and undermined innovation."

    But in his response to reporters on behalf of his company early this afternoon, East Coast time, Intel CEO Paul Otellini suggested that the EC investigators essentially saw what they wanted to see, and by overlooking key facts, ignored the basic principles by which the computer product market operates in Europe. Without saying so in explicit language, and without naming any of Intel's own customers, Otellini adeptly countered that the business environment that leads to exclusivity deals was set up by those customers. Specifically, he did say that manufacturers auction off their sales, and the winner of those auctions effectively cancels out the loser, achieving exclusivity by default.

    Intel CEO Paul Otellini"On occasion, we'll sell a combination of microprocessors and other chips at a price which is more favorable than if people bought the products independently," Otellini told reporters. "They have the right to buy the products independently, they have the right to mix-and-match products. But we believe that, relative to our CPU chipset pricing, there's no harm nor foul there. On the rebates, it's a simple volume discount kind of environment. We bid for business under a number of conditions, some of which are trying to meet competition. Our customers put business up for bids, we bid on it, AMD bids on it, and when you have a market which is principally supplied by two players, when one company wins, the other, by definition, will lose the business. And so I think this is really just a matter of competition at work, which is something I think we all want to see, versus something nefarious."

    Otellini later went on to describe customers -- presumably including Acer, Dell, HP, Lenovo (the PC market successor to IBM), and NEC -- as being particularly savvy market players who set up the conditions for the market in which AMD and Intel both play. By implication, he said that if any of these customers ends up buying 80% or more of its CPUs from Intel, it's because it wanted to, and because that much of its business was up for bids.

    Notice, the CEO commented, that none of the manufacturers named by the European Commission this morning were actually complainants in the case. One reporter asked, could that be because they were afraid to comment -- afraid to lose Intel's business? Otellini dismissed the whole idea as ridiculous, saying, "As to our customers, it's absurd to think that we would not sell product to someone who happened to not like a particular comment or term or whatever. This is a very competitive business, our customers are in most cases larger than Intel. Our customers have incredible buying power, and are excellent negotiators. So on the face of it, your scenario is absurd."

    Otellini said he had not yet seen the 500+ page ruling handed down by the EC, so he was baffled at how it could have reached an amount for the fine of €1.06 billion. If that's supposed to represent harm to the consumer, he said, he can't figure out how that aligns. "It's hard to imagine how consumers were harmed in an industry which has lowered the cost of computing by a factor of 100 during the term of this case," he said. "And at the same time that happened, AMD claims that it's more vibrant than ever. So I don't see either evidence of consumer harm or competitor harm happening here."

    Soon after Otellini's press conference, AMD Vice President for Advanced Marketing Pat Moorhead spoke with Betanews. We asked Moorhead, how would the European CPU market have been different had Intel not engaged in this conduct?

    "I believe that there would be an impact on innovation, I believe there could be an impact on price, and also an impact on people's choice," Moorhead told Betanews. But is there a realistic dollar value that AMD or anyone has been able to pin to this impact, we continued -- knowing that AMD's civil suit against Intel in a Delaware court is ongoing, and that AMD seeks damages there. "I actually think the EU put a dollar figure on it the best they can. Does that represent the exact damages to the consumer? I don't know," he responded. "But if you think about the fact that there's a 50% price delta between us and Intel, if you look at the fact that they've just been convicted of using monopolistic power, using bribery and coercive measures, to block us out, I think it's pretty safe to say that its impact on prices could have been pretty amazing.

    "I think it would be very different, in terms of the amount of innovation, the amount of money that AMD could have reinvested into its R&D," continued AMD's Moorhead. "Now, OEMs set the pricing for the end systems, but if you look at the 50% differential between our products and Intel's microprocessors, it's not too difficult for me to assume that prices would be different as well. And there has been a lot of innovation, and I think there is no artificial barrier of how far innovation can go. If we can all agree that, without competition, there isn't innovation and there are higher prices, you put that monopolistic practice in there, and it's not going to go as fast as it would have gone."


    How much change will the PC market actually see?

    We presented Moorhead and AMD Communications Director John Taylor with Otellini's statement of just minutes earlier, painting the CPU market as being driven by manufacturers whose loyalties have price tags that dominant players, by the EC's definition, are better able to meet than others. While both declined to knock down the Intel CEO's illustrations entirely, Moorhead asserted there's a big difference between volume pricing -- which Otellini said was the only dynamic in play during the relevant period of the EC's investigation -- and exclusionary pricing, which gives customers huge price breaks in exchange for exclusivity. However it was that these deals were entered into, he told us, the EC determined that they were in fact made, and that Intel is in fact responsible.

    "Who knows how far prices would have fallen had Intel played fairly?" Moorhead asked rhetorically. He went on to cite the count of the EC ruling where HP declined a majority of AMD's offer of free CPUs, in order not to interfere with its existing deal with Intel. "You can't get cheaper than free," he said. At one point during today's Intel press conference, a reporter asked Otellini whether he felt consumers would be less inclined to purchase Intel-based products. "It's hard to imagine that the dynamics of competition would change," he responded. "Most customers buy from both suppliers today. Most customers buy more or less from each supplier depending on the quality of the products, the competitiveness of the products, and the pricing. That dynamic hasn't changed in my career at Intel, which is 35 years, and I don't expect it to change. I don't think a customer is going to put him- or herself at a disadvantage by buying inferior or more costly products just to try to walk lines that maybe are artificial."

    AMD's Pat Moorhead, though, believes that Intel is now permanently marked. Like an ex-convict, it now has to check in with EU authorities periodically to have its behavior monitored. And that stain may extend to its business deals in the US and elsewhere, he said: "If someone steals from his neighbor, it still makes that person a thief, even though he didn't steal from your house."

    From this point, AMD's John Taylor believes that customer and even press perception of AMD will be fairer and more even-handed. "It's not AMD that has to change," he repeated a few times, especially after we cited Otellini more times than he might have liked. "There is [now] a virtuous cycle of fair competition and innovation. [In its absence,] it doesn't matter how hard AMD innovates; the judges will score half-a-point for every blow AMD lands and two points for every blow Intel lands. [Intel's behavior] shuts down that virtuous cycle, stands on AMD's windpipe, and caps that reward for innovation that AMD would receive, that could have been poured back into R&D."

    Expecting the opposite opinion from Otellini, one reporter asked him today whether he expected PC prices to rise as a result of the ruling, partly from Intel passing on the costs of the fine (payable within 90 days of the ruling) to its customers. Repeating his notion that the fine is not officially a "cost" per se that can be transferred to consumer prices, the CEO responded, "I think they'll absolutely see a difference in the price of PCs. Certainly...prices will continue to go down. Quality goes up, performance goes up. There's nothing in this ruling that reverses Moore's law."

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/13/EU_fines_Intel__1.4B__says_it_paid_OEMs__retailer_to_exclude_AMD_products'

    EU fines Intel $1.4B, says it paid OEMs, retailer to exclude AMD products

    Published: May 13, 2009, 5:21pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    For years, the evidence against Intel with regard to its business conduct in Europe has been treated as allegation, especially by anyone in the press with any serious intent of showing fairness. As of today, at least in Europe, it's no longer an allegation: Intel cheated, says the European Commission this morning, in a decision that can best be described as the worst-case scenario for Intel coming to fruition.

    This morning, the EC found that for a 62-month period beginning in October 2002, Intel paid German retailer MediaMarkt, which operates stores primarily in Germany and Russia (not an EU member), to sell Intel-based computers exclusively in its retail outlets. That finding is based on evidence turned up during a February 2008 raid of Intel's German offices.

    And in instances that were already at the heart of AMD's civil antitrust lawsuits against Intel in the US and abroad, the Commission ruled that Intel made rebates throughout the same period to five major computer manufacturers, based on conditions that are illegal under European law. It's not the rebates themselves that are illegal, the EC made clear, but rather the fact that they were tied to the recipients' promise to cap their shipments of AMD-based products in key categories at levels ranging from as little as 20% of their overall sales down to zero.

    In deference to those recipient companies, the EC this morning left out their identities with regard to specific charges. But it did reveal their names collectively, and it's the names we'd expected since AMD's 2005 civil suit against Intel began: Acer, Dell, HP, Lenovo, and NEC. In explaining the events of Intel's misconduct, the EC referred to these companies in random order, calling them computer makers "A," "B," "C," "D," and "E."

    If we piece together some of the news that happened during the relevant period, their identities may be sorted out easily enough. For example, according to the EC's statement this morning, "Intel made payments to computer manufacturer E provided that this manufacturer postponed the launch of an AMD-based notebook from September 2003 to January 2004." Taiwanese industry daily DigiTimes had covered the market extensively during this period. Its reporting (excerpted here) showed that Acer delayed the introduction of AMD Mobile Athlon 64-based notebooks past the period the CPUs were introduced -- September 2003 -- until after the holidays.

    In a statement this morning from EC Commissioner for Competition Neelie Kroes, she points out that it's the behavioral stipulations that made Intel's rebates illegal: "Not all rebates are a competition problem -- often they will lead to lower prices for consumers in the long term as well as the short," stated Comm. Kroes. "But the Intel rebates in this case were a problem because of the conditions that Intel attached to its rebates. Moreover, the Commission has examined closely whether an efficient competitor could have matched these rebates. These conditions, to buy less of AMD's products or to not buy them at all, prevented AMD from competing with Intel on the merits of its products. This removed the possibility of genuine choice for consumers and undermined innovation."

    Citing a case that AMD itself brings up frequently, the EC this morning mentioned how Intel offered "one computer manufacturer" millions of free CPUs only to have that offer turned down. That particular manufacturer wasn't mentioned by letter; but thanks to the high publicity surrounding the US civil antitrust suit, we know it to be HP.

    Leading to the Commission's decision to fine Intel €1.06 billion, according to Kroes, was evidence she said pointed to Intel attempting to cover up its conduct.

    "The Commission Decision contains evidence that Intel went to great lengths to cover-up many of its anti-competitive actions. Many of the conditions mentioned above were not to be found in Intel's official contracts," she stated. "However, the Commission was able to gather a broad range of evidence demonstrating Intel's illegal conduct through statements from companies, on-site inspections, and formal requests for information."

    Further comments from Intel officials are expected later this morning, and Betanews will follow up shortly afterward.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/12/Windows_7_gives_Firefox_3__IE8_speed_boosts__while_Firefox_3.5_slows_down'

    Windows 7 gives Firefox 3, IE8 speed boosts, while Firefox 3.5 slows down

    Published: May 12, 2009, 11:18pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Test Results

    In preliminary Betanews tests Tuesday comparing the relative speeds of major Web browsers in Windows Vista- and Windows 7-based virtual machines, not only did the general performance of Microsoft Internet Explorer 8 improve by about 23%, but the latest production build of Firefox 3.0.10 appeared to improve its performance by 17.5%. This despite running in a Windows 7-based virtual machine that we estimate to be 12.1% slower overall than a Vista-based VM hosted by the same environment.

    These are the initial findings of Betanews' experiments in how the architecture of Windows 7 may or may not influence the performance of major Web browsers. We wanted to see whether Win7 made browsers faster or slower, and doing that meant hosting browsers in virtual environments whose relative speeds with respect to one another could be normalized.

    As we discovered, Windows 7 RC Build 7100 runs perceptibly slower than Vista SP2 on a Virtual PC 2007 platform hosted by XP SP3. This does not mean Windows 7 is a slower operating system, but rather that it behaves more slowly in this particular virtualized environment, which after all was designed for Vista. So to make our test fair, we needed to estimate just how much slower our Win7 environment was than Vista, and factor out that difference.

    Up to now, we've been comparing relative browser performance in Vista using a relatively slow browser to judge against: IE7. We've used IE7 as our gauge of how much more readily other browsers blow right past it in the performance department, including IE8. But we don't want to install IE7 on Win7 -- although it's technically feasible, doing so would pollute the operating system for running IE8 and other applications. So we needed a new, slow browser that we could rely upon to stand still for us, relatively speaking.

    Our first choice was Firefox 1.5, but we learned it had difficulty running in Win7 at all. We ended up using Firefox 2.0.13, not quite the final build of that series of Mozilla's browser. Our aim was to use this browser as a fair gauge of how much slower our Win7 environment was than Vista. This way, we could equalize our indexes, which are based on IE7 -- we can't run IE7 on Win7, but we can estimate how much slower IE7 would be if we could, by measuring how much slower Firefox 2.0.13 is. Though the average speed difference is 12.1% in favor of the Vista VM, for our browser benchmarks, we created differentials for each heat in the competition, to more accurately account for environmental factors between the two environments.

    In the Vista VM alone, Firefox 2.0.13 puts in a performance index of 2.49, meaning it performs 249% as well as IE7 in the same environment. Compare that to Firefox 3.0.10's index score of 5.19 in recent Betanews tests in the Vista VM.

    Factoring out the speed differentials, we can reliably say that IE8 gives us a performance index of 2.69 in the Win7 VM versus 2.19 in the Vista VM. Meanwhile, Firefox 3.0.10 scores a 6.10 normalized index score in the Win7 VM versus 5.19 in the Vista VM.
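
    The arithmetic behind that normalization can be sketched briefly in Python. We've simplified it here to a single scale factor, whereas our actual tests computed a differential for each heat; the raw Win7 reading below is a hypothetical stand-in:

        # Firefox 2.0.13 is the slow "yardstick" browser in both VMs
        fx2_vista = 2.49                    # index in the Vista VM (IE7 = 1.0)
        fx2_win7 = fx2_vista * (1 - 0.121)  # assumed ~12.1% slower in the Win7 VM

        # How much the Win7 VM handicaps any browser, per this yardstick
        scale = fx2_vista / fx2_win7

        def normalize(raw_win7_index):
            # Project a raw Win7 index onto the Vista VM's IE7-based scale
            return raw_win7_index * scale

        raw_fx3_win7 = 5.36                 # hypothetical raw Firefox 3.0.10 reading
        print(f"{normalize(raw_fx3_win7):.2f}")   # prints 6.10, as reported above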

    The news is not all good for Mozilla, however. Under the same test conditions, Firefox 3.5 Beta 4 slows down in Win7, but only by about 2.5%, scoring a 10.18 normalized index score in the Win7 VM versus 10.44 in the Vista VM. So from this angle, it appears that Windows 7 helps close the gap between Mozilla's production browser and its experimental browser. We're interested to find out whether similar discoveries await us with regard to Google Chrome, and whether Win7 will play nicely with Apple's Safari for Windows. Those results are still forthcoming.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/12/The_Web_without_the_browser__Mozilla_s_Prism_enables_true_Web_apps'

    The Web without the browser: Mozilla's Prism enables true Web apps

    Published: May 12, 2009, 5:58pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Mozilla Prism for Windows 1.0 Beta 1 from Fileforum now.

    Mozilla Labs has been devoted to building ideas into viable code that may or may not become products someday. For a year and a half, one of its tasks has been to build a framework for deploying Web-based applications straight to the desktop, while introducing, though not necessarily mandating, a new methodology or set of practices for sites to follow. In other words, if an application is already live in a browser like Firefox, let's take it out of the browser motif and move it to the desktop.

    Since much of Mozilla is about the fine art of testing, the Prism project last week was able to officially exit its internal testing phase, and enter...a new phase of testing, this time what Mozilla calls the Beta 1.0 phase. So while Prism 0.9 is now history, Prism 1.0 is in public beta, with its developers openly inviting users everywhere to install existing Web apps as though they were just plain apps.

    Last year, when Mozilla staff "phenomenologist" Mike Beltzner introduced us to Prism, he told Betanews, "What Prism does...is allow you, when you get to any one of these applications on the Web, to just click a button and say, 'I want to make this an application on my desktop.' You'll get an icon on your desktop, and you'll be able to interact with it through Alt-Tab like anything else, but it will actually just be this Web site. Now, there's a little way to go with Web technologies. You need offline support, you need to be able to use that application when you're connected or when you're not connected. So...we've built in support for a new HTML standard for offline applications."

    Using the Prism Firefox plug-in to enroll an existing Web application (Zoho Writer in this case) as a stand-alone app.

    Prism comes as a two-part set. Technically, you only need the Prism "runtime" (the Firefox browser with a little less fox and more fire), though in Betanews tests on a Windows XP SP3-based machine, we had difficulty getting the stand-alone Prism to enroll a Web application as a stand-alone app. We had much better luck with the Firefox plug-in, which lets you use Firefox to browse to the application you want to enroll, then from the Tools menu, select Convert Website to Application. A very simple dialog box gives you the only options you need for effectively bringing up an instance of Prism like a browser and loading your Web app as though it were its home page. When we checked the Start Menu, we found Prism had created a shortcut there -- conveniently in the Web Apps folder rather than in the usual mess of first-tier apps.
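
    Conceptually, the bundle behind a Prism app is tiny: a small INI file naming the target site plus a few chrome options, zipped into a .webapp archive. Here's a sketch in Python of generating such a bundle; the key names are our illustrative guesses, not Prism's documented schema:

        import configparser
        import zipfile

        cfg = configparser.ConfigParser()
        cfg["Parameters"] = {
            "id": "writer.zoho.com@prism.app",   # hypothetical bundle id
            "uri": "https://writer.zoho.com/",   # the site being "installed"
            "status": "yes",                     # show a status bar
            "navigation": "no",                  # hide back/forward chrome
        }

        # Write the config, then zip it into a .webapp-style archive
        with open("webapp.ini", "w") as f:
            cfg.write(f)
        with zipfile.ZipFile("zohowriter.webapp", "w") as z:
            z.write("webapp.ini")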

    For now, we're noticing one curious problem: We can't run more than one Prism instance simultaneously. The multitasking problem will need to be solved if we intend to enter the Web application world intact.

    Zoho Writer 2.0 is vastly improved over its predecessor, as we discovered last month. But one of its remaining dilemmas with the standard Firefox browser is that it interacts poorly with some plug-ins -- in many of our test systems, JavaScript won't size the user's choice of fonts correctly. Moving the Web app to Prism solves this problem, because it gets rid of all the excess baggage that surrounds every browser window (it's our fault for installing it all in Firefox in the first place, I suppose). Now, although you can start your Web app like an installed application, Windows itself doesn't see it that way yet; in our tests, it still groups the browser as a taskbar entry under Firefox, along with any other instances of Firefox you might have open. So the transition to the new metaphor isn't complete just yet, though we're delighted to see Web applications elevated even this much to the level of "real software," not by building up the Web browser but by stripping it down. By itself, Prism is merely 8 MB of code, making it an inoffensive and unobtrusive runtime platform -- as well as, in many ways, a path for the future development of the Web itself.

    Zoho Writer running as a stand-alone app in Prism 1.0 beta.

    Download Mozilla Prism for Linux 1.0 Beta 1 from Fileforum now.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/12/Russinovich_rescues_the_TechEd_2009_keynote_with_Windows_7_AppLocker_demo'

    Russinovich rescues the TechEd 2009 keynote with Windows 7 AppLocker demo

    Published: May 12, 2009, 4:01am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Mark Russinovich demo at TechEd 2009.

    In the absence of many dramatically new product announcements (notices about the Office 2010 technical preview and Windows Mobile 6.5 were already expected), it was Senior Vice President Bill Veghte's job for the first time to rally the troops during this morning's TechEd 2009 keynote address in Los Angeles. But perhaps not everyone has Bill Gates' knack for holding an audience captive with sweeping gerunds and participles, or Ray Ozzie's outstanding ability to conjure a metaphor as though it were a hologram hovering in space, and describe it for countless minutes without relating it to the physical universe.

    What may have kept attendees affixed to their seats for the time being was the promise of Mark Russinovich, Microsoft's Technical Fellow who always dives right into a real-world demonstration in the first few minutes, and is always affable enough to be forgiven for the inevitable technical glitch. Though Russinovich's stage time today was shorter than usual, one of his highlights was a demonstration of a feature Windows 7 RC downloaders had already received but may not have known they had: a way, using group policy, to block specified software from running on client systems even after it's been upgraded or revised.

    It's Windows 7's new AppLocker feature, which he calls "SRP [software restriction policy] on steroids." Think of it as a firewall, but at the kernel level: When enabled in a network environment, by default, AppLocker prevents any application from running that isn't recognized as part of Windows. That, by itself, isn't something anyone would want; so using group policy, or Local Security Policy at the client level (yet another reason why the Windows client should not disable group policy management), a user or admin can program exceptions to this default rule. Those exceptions can monitor the operating system for metadata pertaining to running applications, enabling selected software to run even after it's been upgraded.

    While application disablement has existed in Windows Vista, the problem it's had up to now is that whenever programs change, the rules for disablement have to change with them. Network administrators use these fairly strict rules as a means of prohibiting employees from installing just any old software they find, or from downloading media that triggers the download and installation of something very much unwanted.

    During Russinovich's demonstration, he launched one of his own line-of-business apps called Stock Viewer that, under the default rule, failed the execution test after a revision. He used that failure as leverage for launching a new wizard in Win7 that lets the admin quickly create a new allowance rule to mitigate future failures.

    Windows 7's new Create Executable Rules Wizard, which enables admins to prevent unwanted executions beneath the firewall level.

    While SRP in Vista limited group policy rules to filename and file hash (a hash signature based on the unaltered binary contents of the executable file), Windows 7's new rule class, called "Publisher," lets the admin tailor the rule to account for a wide or narrow scope of metadata. In this particular figure, we used IEXPLORE.EXE (Internet Explorer 8 in Win7) as a template for entering fully qualified publisher metadata into a rule. From there, the wizard cleverly uses the slider control to dial up or down the level of control the admin needs for the rule, with down representing deeper control.

    Microsoft Technical Fellow Mark Russinovich at TechEd 2009.

    As Russinovich described, "The slider over here on the left lets you dial up or down the specificity of your rule. For example, if I trusted everything from SysInternals [his own company, acquired by Microsoft] -- which you should, obviously -- then you'd want to set this slider to here [Publisher]. But if I slide it all the way down to here [File Version], I'm creating a rule that says that only Stock Viewer is allowed to run, and only versions 1.0 or higher. So I've really controlled exactly which application from this publisher is allowed to run, but I've still made it flexible because if version 2 comes out, I don't have to go revisit this rule. It's just going to magically work."
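
    To make the slider's behavior concrete, here's a conceptual sketch in Python of how a Publisher-class rule narrows with each notch. It models the idea only -- real AppLocker rules live in group policy, and the field names here are our own:

        from dataclasses import dataclass

        @dataclass
        class FileMetadata:
            publisher: str
            product: str
            filename: str
            version: tuple      # e.g. (2, 0)

        # Slider notches, from least specific (top) to most specific (bottom)
        LEVELS = ["publisher", "product", "filename", "version"]

        def allowed(rule_level, rule, candidate):
            # Match every field down to the chosen notch; treat the version
            # field as a floor ("this version or higher"), per the demo
            depth = LEVELS.index(rule_level) + 1
            for field in LEVELS[:depth]:
                if field == "version":
                    if candidate.version < rule.version:
                        return False
                elif getattr(candidate, field) != getattr(rule, field):
                    return False
            return True

        rule = FileMetadata("Sysinternals", "Stock Viewer", "stockviewer.exe", (1, 0))
        v2 = FileMetadata("Sysinternals", "Stock Viewer", "stockviewer.exe", (2, 0))
        print(allowed("version", rule, v2))   # True: version 2 "just works"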

    Group policies in modern Windows can be modeled on one computer and then applied to multiple clients in a network. Alternatively, for a less Draconian approach, you can set up AppLocker to allow everything to run except those applications you specify; and there, you can use Publisher-class rules to craft metadata-based exceptions. But that's not always the approach you want. With the template you see in the figure above, for instance, we can set up a rule prohibiting anyone from using Internet Explorer older than version 8, by effectively enabling version 8 and higher to run; what gets prohibited are the versions you omit.

    Microsoft has published a quick demonstration video of AppLocker at work, downloadable from this address.

    Mounting a virtual hard disk (VHD) file from the Management Console in Windows 7.

    AppLocker wasn't the only demonstration garnering enthusiasm this morning; later during his time, Mark Russinovich demonstrated the first effective use of PowerShell version 2 to generate scripts for applying group policy objects. He later received some rousing applause for the revelation that Windows 7 can mount and even use virtual hard disk (VHD) files -- the kind usually reserved for Microsoft-brand virtual machines. This way, a user can have access to a VHD's contents without invoking the actual virtual host that created it. This also enables new possibilities for VHDs' portability between devices. For example, Windows 7 and Windows Server 2008 R2 can now both be set up to boot from a VHD, regardless of where it's located -- on portable storage, maybe over a network, maybe in the cloud.
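
    For admins who'd rather script that VHD mounting than click through the Management Console, the same operation is exposed through diskpart's vdisk commands on Windows 7 and Server 2008 R2. A small Python wrapper might look like the sketch below; the VHD path is hypothetical, and diskpart needs an elevated prompt:

        import os
        import subprocess
        import tempfile

        # diskpart script: select the VHD file, then attach (mount) it
        script = 'select vdisk file="C:\\VMs\\demo.vhd"\nattach vdisk\n'

        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(script)
            script_path = f.name

        # /s runs diskpart against the script file we just wrote
        subprocess.run(["diskpart", "/s", script_path], check=True)
        os.unlink(script_path)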

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/08/No_clear_decision_on_Microsoft_.NET_Micro_Framework_s_new_business_status'

    No clear decision on Microsoft .NET Micro Framework's new business status

    Published: May 8, 2009, 5:59pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Granted, Microsoft's not accustomed to scaling back operations as drastically as it has had to this year, so it's understandable when a company gets the first-time jitters. But as of this morning, not even the people who direct the development of .NET Micro Framework -- Microsoft's innovative development platform for small devices -- can give a definitive answer with regard to what's happening to the project, shedding only selective rays of light on already fuzzy explanations.

    On Wednesday, ZDNet blogger Mary Jo Foley was first with a story saying Microsoft had made the decision to release the .NET MF project to "the community," though the company left the true definition of that term to the rest of the world to ponder. Foley's original source for her story -- as is typical for the veteran journalist -- was Microsoft itself, whose spokesperson had told her and others in the press, "Microsoft also intends to give customers and the community access to the source code." She also quoted portions of the statement saying the business model for .NET MF was changing to "the community model."

    That sounded like a scaling back to Foley; but other portions of the spokesperson's statement were unclear as to what was going on. As she now reports, she was asked to correct her characterization, although she sticks by her story.

    Others who had gotten hold of the announcement reported that Microsoft was phasing out .NET MF, and that turning it over to "the community model" was merely a euphemistic way of saying it was dumping the product. Meanwhile, however, the man in charge of the unit responsible for the product found himself in the no-longer-unusual position of correcting absolutely everyone on the story, including and especially the company's spokespersons.

    "First, the product is moving into the Developer Division (Server and Tools)," wrote Unit Manager Colin Miller yesterday. "This is a great fit for the technology and we are really looking forward to it. The move means that we will be fully aligned with the rest of the .NET groups and tools in building the uniform programming model from the sensors to servers. The announcement that we are moving to some form of community direction and development including code access is accurate. We will investigate how to do that in the near term so stay tuned. For now however, the current products are available and continue to be supported as before."

    That may mean Microsoft intends to discontinue charging royalties to companies whose devices were programmed using .NET MF and whose ROMs include parts of Microsoft's code. That much was implied by the spokesperson on Wednesday, although the statement was interpreted to mean it was already happening.

    Developer Manager Lorenzo Tessiore followed up later yesterday with this notice: "We are currently in the process of framing the rules of engagement and we hope to be able to offer both a process for a regulated development effort and a broad license, so that it will be possible to take advantage of the code base without necessarily contributing to the community. The details and the rules of this engagement will be defined over the near term with your involvement as well." That phraseology only addresses royalties owed to developers; it does not directly state that royalties won't be owed to Microsoft.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/08/The_legacy_of_Khan__Star_Trek_s_first_collision_course_with_the_mainstream'

    The legacy of Khan: Star Trek's first collision course with the mainstream

    Published: May 8, 2009, 12:28am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    My best friend Jeff saved me a copy, because he knew I'd not only want to see it but dissect it, the way a hungry crow goes after a freshly slammed armadillo in the middle of I-35. I was a Star Trek fan the way a New Yorker is a fan of John McEnroe or an Oklahoman is a fan of the Dallas Cowboys, loving to see them in the spotlight but always critiquing their style. Jeff was the assistant manager of a movie theater with four (four!) screens, so he got the advance promotional kit for Star Trek II: The Wrath of Khan, and Jeff saved me the first promo poster with stills from the movie. Between the premiere and my high school graduation, the premiere was -- at least at the time -- the more exciting event.

    The Wrath of Khan

    I can't think of Star Trek movies today without picturing the gang of us seated around the linoleum tables at Big Ed's, chomping down a heap of fresh-cut fries and taking apart the pictures from the promotional kit for clues. What was the meaning, for example, of Uhura's and Chekov's sweater collars being blue-gray, while Sulu's and Scotty's were mustard yellow?

    If you're wondering, why bother with such trivia when we knew the answer (if there was one) was something arbitrary, consider this: Not only is Star Trek the first, and perhaps the only, franchise born of television to become elevated to the realm of global folklore, but it was never really born of the mind of one person. Gene Roddenberry was not to Star Trek what C. S. Forester was to Horatio Hornblower; Trek was and still is, in many respects, a public playground built on intellectual property seeded by Roddenberry but left to flourish. So unlike a franchise devised largely or solely by one person -- for instance, J. K. Rowling's Harry Potter -- there's a certain accountability to the public that Star Trek has, that's shared by no other collective work of fiction. It's the closest thing to open source that will ever emerge from a Hollywood studio.

    The true Star Trek fan is someone who has an unusual personal investment in the story. So when someone dares to take the story further, the fans are folks who have an interest in the outcome, and who want to hold the writers and producers to their bargain. And for folks like us whose junior high and high school lives were, for most days, peculiarly unworthy of ever being chronicled in hardcover or on the big screen, Star Trek was not so much an escape as an endeavor in survival. It was the hope of something bolder, more worthy of our time and effort, than the institutionalized hopelessness that kept us chained to our seats while our teachers were on break, or wherever they had ventured off to.

    It was also the ticket to everything bigger. My very first exercise in learning to program a computer was debugging an 80-column line-printer-driven Star Trek game that incorrectly scored the number of Klingons the player had killed. By the time The Wrath of Khan was first being advertised, most of my friends and I were collaborating on the production of a bigger and better Star Trek game that used artificial intelligence techniques I'd been learning. It was the beginning of the open source movement (which also meant we couldn't make a dime from our work without owing Paramount). I dove into college-level trigonometry long before graduation, in an effort to build a combat model for three-dimensional space.

    As a young kid, Star Trek helped me feel not so alone, at a period of time when I was absolutely alone when a boy shouldn't be. So imagine my surprise when Trek II was first coming out. It wasn't playing at Jeff's theater -- it was two miles south, at the Quail Twin Cinema, theater #2. Number two was huge: It had the best sound system, it had two aisles, it could hold a few thousand seats, and it was adorned in the most other-worldly blue curtains. Had the screen fallen down, you might have been able to play football on the thing.

    The line for Trek II that June afternoon extended behind the theater, around the perimeter of the parking lot, into the lot of the nearby tire store, out along May Avenue, down two blocks, and into a residential neighborhood along 112th Street. A police cruiser was stationed along the street to keep order, mindful that when Star Wars premiered just a few years earlier, there had been protesters ("God is the only force!"). There were enough teenagers there to fill my high school twice over.

    My friends had staked out a prime position for us all, enabling me to take a walk down the admission line. There I'd find what seemed like several hundred of the same folks who used to make fun of my Star Trek fandom, who had been guilty of the unspeakable sin of confusing Trek with Wars ("Hey, Scott, you been on the Enterprise with Obi-Wan lately?"). I marched alongside them like Capt. Kirk reviewing the troops, and many of them knew exactly what I was thinking and how I felt. It was my victory march. It meant more than any diploma I had ever received.

    Once inside Quail #2, Trek II was -- and is to this day -- the most fun I have ever spent in a movie theater without a date. There were thousands of us, guys like me and folks unlike me at all, openly speculating on the meaning of Uhura's and Chekov's blue-grey sweater collars. Rarely did fifteen seconds pass during the film without some audible form of mass audience reaction, including openly cheering for the stars' names in the opening credits (the whistles for Kirstie Alley could have shattered glass). Guys who spat in my general direction during my cold, grey junior-high years were snarling at Khan, wincing when he put the scorpion-thing in Chekov's ear, hissing when he channeled Captain Ahab from Moby Dick. When Sulu's phaser shot took out the Reliant's torpedo launcher, the theater erupted in a deafening roar that made it impossible to hear the dialogue for minutes thereafter.

    And several of them even shook my hand after it was done, or nodded in my general direction, or gave me some acknowledgement that we as a people had grown up at last, that we were all in the same league, and that letting your mind play on the public playground of science fiction was a real blast. It was our graduation day, in a very real sense.

    In the US, folks will be seeing the latest Star Trek movie for the first time -- the 11th in the series -- tonight. And if you're wondering why it's more important than your average movie for many of us, it's because that's our story they're messing with, and they'd better mess with it right.

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/07/Top_10_Windows_7_Features__7___Play_To__streaming_media__courtesy_of_DLNA'

    Top 10 Windows 7 Features #7: 'Play To' streaming media, courtesy of DLNA

    Published: May 7, 2009, 10:20pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews


    Perhaps you've noticed this already: Getting media to play in a Windows-based network is a lot like siphoning water from a pond using a hose running uphill. If you can get enough suction, enough momentum going, you can get a decent stream, but there are way too many factors working against you. Foremost among these is the fact that you're at the top of the hill sucking through a hose, rather than at the bottom pushing with a pump.

    So home media networking is, at least for most users today, precisely nothing like broadcasting whatsoever. That fact doesn't sit well with very small networked devices like PMPs, digital photo frames, and the new and burgeoning field of portable Wi-Fi radios like Roku's SoundBridge. Devices like these don't want or even need to be "Windows devices;" and what's more, they don't want to be the ones negotiating their way through the network, begging for media to be streamed uphill in their general direction. They want to be plugged in, shown the loot, and told, "Go." Back in 2004, a group of networked device manufacturers -- the Digital Living Network Alliance (DLNA, and yes, it's another network association) -- coalesced with the idea of promoting a single standard for being told "Go." But up until today, there hasn't been a singular, driving force uniting the standards, something to look up to and follow the way Web developers followed Internet Explorer.

    That changes with the advent of Windows 7. Starting now, Microsoft will be pushing its media upstream, if you will, by means of a cohesive "Go" concept for capturing and commandeering media devices, called Play To. The DLNA introduced the concept last December, driven in part by Microsoft's intention to use Play To in Win7.

    The idea begins simply enough: DLNA-capable wireless and wired devices may be all over your house. Rather than set them up using their own controls, wherever they may be, a router or access point in the home should be able to push the setup information they need to enroll themselves in the network. From there, those devices (which may include a picture frame, an MP3 player, a Zune, an Xbox 360, or conceivably another computer) may serve as destinations for media being pushed from a Windows 7 machine, either through Media Player 12 or Windows Media Center.

    Add to this formula the notion that the media that Windows pushes to a DLNA device may come from another DLNA device, such as a Windows Home Server machine or perhaps a DV-R device from Toshiba or Sony (though it would be nice if Slingbox and TiVo were on this list).

    And keep in mind here once again, there doesn't need to be a Media Center Extender in this operation (apparently Microsoft's making them anyway, though in an optimum setup, you wouldn't need them). The DLNA-compliant access point has handled the problem of identifying the media playing device and enrolling it in the network, so the device doesn't have to be a slave to whatever Windows machine is sending it a stream.


    In a demonstration video released last December on Microsoft's Channel 10, company developer Gabe Frost demonstrated how the DLNA standard would play into a Windows 7-endowed home media network. The new "Play To" command in Win7 enables a user to push content from a PC to a device gathered into the collective DLNA pool. There's no need for shared directories or creating named network shares, which would normally serve as points of location -- the type of tool you'd need if you were trying to hunt down the location of the streaming device from the playing device. "Play To" pushes the content to where you want it to go.
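
    Under the hood, a DLNA push rides on UPnP AV: The controller tells the renderer what URI to fetch, then tells it to play. Here's a bare-bones sketch of those two SOAP calls in Python; the control URL and media address are hypothetical stand-ins for what a real controller discovers via SSDP:

        import urllib.request

        CONTROL_URL = "http://192.168.1.50:8080/AVTransport/control"  # hypothetical
        SERVICE = "urn:schemas-upnp-org:service:AVTransport:1"

        def soap_call(action, args):
            # Wrap the action in a SOAP envelope and POST it to the renderer
            envelope = (
                '<?xml version="1.0"?>'
                '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
                's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
                f'<s:Body><u:{action} xmlns:u="{SERVICE}">{args}</u:{action}></s:Body>'
                '</s:Envelope>')
            req = urllib.request.Request(
                CONTROL_URL, data=envelope.encode("utf-8"),
                headers={"Content-Type": 'text/xml; charset="utf-8"',
                         "SOAPACTION": f'"{SERVICE}#{action}"'})
            return urllib.request.urlopen(req).read()

        # Point the renderer at a track on, say, a NAS box -- then hit Play
        soap_call("SetAVTransportURI",
                  "<InstanceID>0</InstanceID>"
                  "<CurrentURI>http://192.168.1.20/music/track.mp3</CurrentURI>"
                  "<CurrentURIMetaData></CurrentURIMetaData>")
        soap_call("Play", "<InstanceID>0</InstanceID><Speed>1</Speed>")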

    Next: How DLNA may make your HDTV into a Windows 7 "receiver"...

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.

    What the new DLNA scheme in Windows 7 enables goes a little deeper and further than just the addition of the "Play To" command. Since network-attached storage devices are now DLNA compliant (never mind the kerfuffle, to borrow an Angela Gunn term, over the variable degree of compliance in the market today), a Media Player- or Media Center-enabled PC can pull content from a NAS device and push it to another destination. Since more HDTVs are entering the DLNA scheme, that destination may be the family room TV, even without connecting that TV to the Windows PC as its monitor. And to do this, the PC doesn't even have to have a big display -- it can be the family laptop.

    This gets bigger still: Another networking alliance which is itself allied with the DLNA is the Multimedia-over-Coax Alliance (MoCA). Its mission is to evangelize home networkers on the idea of using the sideband of the coaxial cable already wired throughout their house as a backbone for a wired (and thus certainly more secure) network. The upshot of such a scheme would be that modern HDTVs could become network-ready devices without the user having to add any other cabling to them beyond what the cable guys have already installed.

    A diagram showing how MoCA's coaxial cabling could link a home network.

    So the "Play To" command in Windows 7 could theoretically, by means of DLNA, push anything that can be played in Windows Media Player 12 on the big HDTV in the living room. The consumer would need to purchase one device to make this happen, however: It's called a MoCA bridge, and we saw the first such devices from D-Link premiere at CES back in January of 2008. It's a simple device that has an Ethernet input and a coaxial output, where the coaxial is simply the same loop that runs through the house already, delivering CATV or satellite signals. Why haven't MoCA bridges taken off yet? Maybe because there's no one certified way to make them work yet...and that will likely change with Windows 7.

    As with any so-called "networking alliance" -- which regular Betanews readers will know to be a favorite oxymoron of ours -- so many disparate factors and dissonant voices must still come together for the dream of making any Windows PC a home broadcaster to become a reality. As of now, reports are piling up of home networks whose third-party media software recognizes DLNA devices (there's been a cottage industry there since at least 2007), yet fails to connect networked storage devices with DLNA-certified, registered output devices. And in that latter category, Sony's PlayStation 3 is one of the more prominent...and notorious. When issues such as these arise, home users end up having to learn more about their computers, about Ethernet, about Wi-Fi, and about the Federal Communications Commission than they'd ever have to know if they had just stuck with something like, say, Nero MediaHome 4 and an Xbox 360.

    What should change that state of affairs is the active participation of Microsoft, and the prominence of the "Play To" command. One of the unsung heroes of Windows Vista to date has been its Media Center, which often works so flawlessly and so simply that one can scarcely believe who made it. If the same degree of knowhow that went into Media Center for Vista is applied to DLNA networking for Windows 7, there will very likely be more households applauding this part of the operating system than critiquing it. And after Vista, the promise of shifting the balance in the direction of praise should be incentive enough to get Microsoft to finally, for once, "Go."





    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/07/Even_more_fusion_after_the_latest_AMD_reorganization'

    Even more fusion after the latest AMD reorganization

    Published: May 7, 2009, 5:37pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In the business press and in business marketing, the term "merger" is often used quite loosely, sometimes to mean the incorporation of another company as a division of the acquirer. When AMD acquired ATI in July 2006, the merger was touted as a pairing of equals, and the forging of a permanent fraternity between two giants in their respective fields. But almost immediately afterward, talk of ways to build processors that used AMD cores and ATI pipelines together led to discussion about truly fusing the two divisions' business units; and the first sign of the fallout from that discussion was former ATI CEO Dave Orton's departure from the AMD executive ranks in July 2007.

    Almost precisely one year ago, the actual fusion of the two divisions began, with the creation of a Central Engineering group that would conduct research and development for all the company's processors. Freescale Semiconductor veteran Chekib Akrout was brought in to lead that department, but in a partnership arrangement with AMD veteran Jeff VerHeul. Yesterday afternoon, AMD announced the remainder of its fusion is complete: As AMD spokesperson Drew Prairie explained to Betanews this morning, there is now one marketing department and one product management department as well, while some of the functionality of Akrout's department is being shifted.

    "Chekib will be in charge of long-term technology development -- IP cores, CPU cores, GPU cores, and putting the building blocks in place," Prairie told us. Meanwhile, "Rick Bergman will lead a unified product development and management organization."

    In the AMD vernacular which dates back a few decades, a "core" is a complete component of intellectual property -- it's the thing a company builds upon in order to create a platform. Since Akrout's ascension at AMD's research arm, he's been actively in charge of the "Stars" core, which is currently at the heart of its Phenom processor line. The first culmination of AMD's technology acquisition from ATI, the "Accelerated Processing Unit" (APU), is part of a platform code-named "Shrike" which builds on the Stars core.

    Just last week, in a planned Q&A released by AMD, Akrout was asked to describe his job at what was still being called Central Engineering: "At the high level, it's managing the decision-making around what technology we will be using and developing at AMD. That includes longer term R&D considerations, as well as new directions and specific innovations we'll be incorporating into the product line. On top of that, I'm responsible for managing all of AMD's IP development ??" core processors for all market segments and associated IP such as analog, I/Os, accelerators and memory."

    Until today, VerHeul was paired with Akrout in the Central Engineering department in building a technology plan around the Stars core. After today, however, VerHeul's duties are shifted to the product management department, led by Rick Bergman. The idea there is to "product-ize" the company's fusion-oriented technologies, although as Prairie explained, that won't mean an acceleration of AMD's current roadmap.

    "There are no changes to our roadmaps one way or the other," he told Betanews. "The goal of the organization is to keep the pedal to the metal with discrete CPUs and GPUs, but give more opportunities for fusion." For example, Bergman will now be working with VerHeul in the development of already-planned platforms that build on Phenom II, the latest incarnation of the Stars core.

    After Akrout was brought on board last year, quarter-century AMD veteran Randy Allen was moved from his former home base in servers and workstations to oversee more consumer-oriented products. That move seemed a bit ominous at the time, and now Allen has decided to leave the company. Prairie declined to discuss what may have led to Allen's decision, though he confirmed that the decision was Allen's and not the company's.

    From an outside perspective, Prairie said, there really won't be that much of a change at AMD, and the org chart doesn't change all that much. VerHeul had been working hand-in-hand with Akrout in moving Stars core products; now he'll be working with Bergman to move Phenom products. "The new charter of Rick's organization is soup-to-nuts," he told us, "to get the products out the door." In that respect, he added, the characterization by the business press this morning of ATI and AMD actually merging yesterday is "a lazy word choice."

    Inside those doors, however, the change is much bigger -- that one organization that's being shifted between departments is essentially the heart of AMD right now. One immediate change that takes place as a result is that VerHeul's focus won't just be on fusion products, but instead "to take advantage of any and all the platform opportunities we have," said Prairie.

    But does this final zipping-up of the last pair of business units working in tandem mean that the last remaining evidence of ATI's existence -- its brand name -- will also be folded in? While AMD's Prairie characterized that event as highly unlikely, he would not close the door on his employer's behalf, telling Betanews: "I wouldn't rush to any implications as to what the branding implications may or may not be."

    Copyright Betanews, Inc. 2009

  • Link for 'BetaNews.Com/2009/05/07/EU_Parliament_approves_law_ensuring_Internet_access_as_a_fundamental_right'

    EU Parliament approves law ensuring Internet access as a fundamental right

    Published: May 7, 2009, 1:05am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    For years, the European Commission has been planning a comprehensive package of telecommunications reform, with the aim of creating a "bill of rights" spelling out what individual European citizens should have a right to do online, and what kind of business environment they should expect. For instance, consumers should have the right to change their carriers while keeping their old phone numbers, reads paragraph 1 of the Telecoms Reform bill; and in paragraph 3, when a member state imposes a measure that a telecom business believes threatens free competition, it may raise the issue before a higher, continental authority that may trump national lawmakers.

    But it's paragraph 10 that's been the cause of considerable debate. After the EC submitted the reform bill to the European Parliament (the lower house of the EU's legislative branch) it amended that paragraph with stronger language about the rights of a European citizen to Internet access -- language that attempts to quite literally equate the right of access to the right of free speech. Last November, the text of that amendment looked like this: "No restriction may be imposed on the fundamental rights and freedoms of end-users, without a prior ruling by the judicial authorities, notably in accordance with Article 11 of the Charter of Fundamental Rights of the European Union on freedom of expression and information, save when public security is threatened where the ruling may be subsequent."

    That language met with opposition from Parliament members who supported French President Nicolas Sarkozy, whose "three strikes" bill against alleged IP pirates was re-introduced in his country's parliament last week, after suffering a defeat there just one week earlier.

    At the same time Pres. Sarkozy's bill was being resurrected, EU Parliament members came to agreement on compromise language about access assurance. At issue there was how best to phrase the part about "the judicial authorities" -- essentially, how to determine who gets to take that fundamental right away, and for what reasons. According to press reports, part of the debate revolved around whether a hardening of the language should be tucked in paragraph 10 itself, or perhaps in the preamble -- in an area of the bill that could be treated like the "small print" accompanying a pharmaceutical company's promise of instant relief.

    "The rules therefore provide that any measures taken regarding access to or use of services and applications through electronic communications networks must respect the fundamental rights and freedoms of citizens, including in relation to privacy, freedom of expression and access to information and education, as well as due process," reads a dispatch from the EC this morning. "The new rules also clarify that the final word on this important matter of Internet access must be with a judicial authority."

    But even with the support of both houses of the EU legislature, the bill cannot become law without the approval of the Council of Telecoms Ministers, which represents the collective regulatory power of the member states with respect to telecommunications. The Council has already voiced its opinion on telecoms matters in recent months; last month, for instance, it suggested watering down the part about the creation of an oversight authority with the power to nullify measures such as Sarkozy's, replacing it instead with a kind of grievance forum with only the power to render "opinions." The Council has appeared worried that the bill's reference to "a judicial authority" leaves open the door for bestowing arbitration power in personal rights matters to a tribunal, making it a kind of appeals court for individual citizens who may feel infringed upon by Sarkozy and leaders of other member states.

    In her trademark style, European Commissioner for the Information Society and Media Viviane Reding effectively laid down the law for the Telecoms Ministers, saying that a vote against telecoms reform for this reason could be construed by their respective constituents as no less than a vote against fundamental human rights.

    "Now the ball is in the court of the Council of Telecoms Ministers to decide whether or not to accept this package of reforms," stated Comm. Reding this morning. "This amendment is an important restatement of the fundamental rights of EU citizens. For many, it is of very high symbolic and political value. I call on the Council of Ministers to assess the situation very carefully, also in the light of the importance of the telecoms reform for the sector and for the recovery of our European economy. The Telecoms Council on 12 June should be used for a political discussion on whether agreement on the package is still possible or whether the discussion will have to start again with the new European Parliament in autumn."

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/06/Google_Chrome_grows_up__joining_the_realm_of_everyday_exploitability'

    Google Chrome grows up, joining the realm of everyday exploitability

    Publié: mai 6, 2009, 10:00pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    When the first public beta of Google Chrome arrived on the scene last September, it was given a rather rude welcome: It immediately faced a vulnerability in need of averting -- though only by virtue of the fact that it uses the open source WebKit rendering engine, whose exploitability had been discovered in Apple Safari just a few weeks earlier.

    Now, however, Chrome is coming into its own, but in a good way: Developers discovered some serious vulnerabilities in the browser apparently before malicious users did. In perhaps the most potentially serious dodged bullet, one of the Chromium project's lead contributors discovered a buffer overflow condition that occurs when a bitmap is copied between two locations in memory. The pointers to those locations may point to different-sized areas without any type or size checking, theoretically enabling unchecked code to be copied into protected memory and then potentially executed without privilege.

    It's a typical buffer overflow situation. But in this case, the Google team was able to investigate and validate the claim, resolve the situation, and issue a test build for quality assurance testing within a mere five days. Still, the QA phase required another eight days before build 154.64 was released, and all during that time, the fact of Chrome's vulnerability was out in the open in Chromium's developers' forum.

    While this isn't the first security-related issue to have affected Chrome since last September, it is probably the most critical. There is no indication that an active exploit of this issue was ever tried or is in the field.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/06/First_Windows_7_RC_patch_turns_off__hang_time__correction_in_IE8'

    First Windows 7 RC patch turns off 'hang time' correction in IE8

    Publié: mai 6, 2009, 4:53pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Perhaps Google Chrome's most innovative architectural feature is the way it relegates Web page tabs to individual processes, so that a crash takes down just the tab and not the whole browser. In addressing the need for a similar feature without overhauling their entire browser infrastructure, the engineers of Microsoft Internet Explorer 8 added a simple timeout mechanism that gives users a way to close a tab that appears unresponsive.

    As it turns out, there are quite a few legitimate reasons why a Web page might appear unresponsive even though it's really doing its job. One of them concerns debugging with Visual Studio, as this user of StackOverflow.com discovered.

    When a tab goes dead in IE8, not only is a message sent to the user giving her a way to dismiss either the tab or the message, but another message is sent to Microsoft as well -- one of the company's indicators of how well IE8 is performing. In a blog post Monday, IE8 engineers reported an uptick in the number of hung-tab reports they received from users of the Windows 7 Release Candidate, one day before the general public got its turn.

    "Based on the initial, Microsoft-internal, data after putting this in the product, we thought the experience was unobtrusive and overall better for users because it provides more information to improve the product," the team wrote. "As more data has started to come in from external Win7 users, we've seen an increase in reports. We're watching the data very closely to understand how well this works for the larger set of users. If we see data that makes us think this is not a good experience, then we'll release an update to address it."

    Just hours after the RC's public release (keep in mind how long it takes general users to download and install the system), Microsoft acted by issuing the first official Win7 RC update: a patch that turns off the hanging-tab reporting feature, along with an alternative method for folks familiar with managing the System Registry. "On low performance computers or on computers under high load conditions, the time-out value is frequently exceeded. Therefore, you frequently receive the error message...described," reads Microsoft's Knowledgebase explanation, using language that does not sound like it passed through the marketing division first.

    Serious Win7 RC testers may want to consider not applying this update, especially as later updates and possible performance enhancements may be handed down, unless these error messages happen so frequently that using IE8 becomes impossible. (Note to the "Experience" team: Consider putting a variable countdown on your timeouts next time.) Registry veterans can resolve the issue (conveniently leaving open a way to un-resolve it for later testing) by creating the Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN!HangResistance as a DWORD value set to 0.
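
    For those who'd rather script the change than click through Regedit, here is a rough PowerShell 2.0 equivalent of that Registry edit -- a sketch only, following the Knowledgebase notation cited above, to be run from an elevated prompt at your own risk:

    # Create (or overwrite) the HangResistance DWORD value under IE's MAIN key;
    # per the Knowledgebase article, a value of 0 stops the time-out error messages.
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Internet Explorer\MAIN" `
        -Name "HangResistance" -PropertyType DWord -Value 0 -Force

    # To "un-resolve" the fix later for testing, remove the value again:
    # Remove-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Internet Explorer\MAIN" -Name "HangResistance"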

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/05/Top_10_Windows_7_Features__8__Automated_third_party_troubleshooting'

    Top 10 Windows 7 Features #8: Automated third-party troubleshooting

    Publié: mai 5, 2009, 11:14pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Among the stronger and more flourishing cottage industries to have sprouted forth as a result of Microsoft Windows is the business of documenting all of its problems. One of the most successful of these efforts has been Annoyances.org, which grew out of "Windows Annoyances" -- much of what Internet publishers have learned today about search engine optimization comes from revelations directly gleaned from the trailblazing work of Annoyances.org. Imagine, if you will, if the instructions that Annoyances.org painstakingly gives its readers for how to eradicate those little changes that Microsoft makes without your permission were encoded not in English but instead in a language that Windows could actually execute on the user's behalf.

    Windows 7 actually builds such an environment -- a system where, if you trust someone other than Microsoft to make corrections to your system, you can accept that someone into your circle of trust and put him to work in Troubleshooting. Can't make that Wi-Fi connection? How do you test for the presence of other interfering signals? Streaming media suddenly getting slow, or running in fits and starts? Maybe there's an excess of browser-related processes clogging up memory and resources. Did something you just install cause Flash to stop working in your browser? Maybe you don't have time to check the 36 or so places in the Registry where that something altered your file associations.

    Windows 7's new Troubleshooting panel, as part of Action Center.

    Last year, Microsoft began making available to some Windows 7 testers what it calls the Troubleshooting Platform, and it's now being distributed as part of the Windows 7 SDK Release Candidate, downloadable today from Fileforum. It contains the development environment for a special type of PowerShell 2.0 script that has the ability to probe clients' systems in an effort to diagnose troubles -- these scripts are the Troubleshooting Packs. Third-party developers will be able to craft Packs that (if they follow the instructions carefully) will be named after the solution they provide rather than the trouble they diagnose.
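
    If the shipping bits behave the way Microsoft has described, Packs can also be driven straight from the PowerShell prompt. A minimal sketch -- assuming the TroubleshootingPack module and the built-in Networking pack are present on the Win7 machine -- looks like this:

    # Load the diagnostics module, open a built-in Pack, and run it interactively.
    Import-Module TroubleshootingPack
    $pack = Get-TroubleshootingPack "$env:SystemRoot\diagnostics\system\Networking"
    Invoke-TroubleshootingPack -Pack $pack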

    Then, using a system we still haven't seen even after the Win7 RC's release (there needs to be some Packs in the field first), Microsoft will be these Packs' ultimate distributor, using something called the Windows Online Troubleshooting Service (inevitably to be called "WOTS," we believe). Now, we imagine something akin to an "apps store" for troubleshooting, though there probably won't be a commercial incentive, at least not through WOTS. New Packs may be searchable by category (assuming the user's Win7 works well enough that he can peruse his system), and newly available packs can be set for automatic download -- a setting which assumes a rather Spartan selection in the future, at least from Microsoft itself.

    The complete set of currently available Windows 7 Troubleshooting Packs appears in the Control Panel.

    However, a white paper released by Microsoft in late February (Word document available here) leaves open the possibility that WOTS isn't the only place where Windows 7 and Windows Server 2008 R2 users may collect Packs. Addressing Windows network admins, the white paper says, "You can deploy troubleshooting packages, using Group Policy Preferences...to copy them to the local hard drive, or simply store them on a central file server." That implies that anyone can distribute a Troubleshooting Pack -- and if they do so themselves, perhaps they may charge for a collection of, say, 101 or 1,001.

    If a little red siren has gone off in your head, that's the usual security warning that accompanies anything Microsoft enables to be done to your system in an automated fashion. It's good to be very, very skeptical. Although it's impossible to imagine such a system not being gamed or exploited at some point in Windows 7's lifecycle, the one very effective security mechanism built into PowerShell is script authentication, which enables script execution to be disabled unless the script writer's signature is trusted. This matters because, although a PowerShell user can disable that particular safeguard for the PowerShell environment, in the context of a Troubleshooting Pack, digital signing is required of Pack authors, and manual acceptance of that signature is required from the user before a Pack can be executed.
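
    In concrete terms, a cautious user could lock PowerShell down and inspect a downloaded Pack's signature before trusting it -- a sketch, with "FixWiFi.ps1" standing in as a hypothetical Pack script:

    # Allow only scripts bearing a trusted digital signature to run at all.
    Set-ExecutionPolicy AllSigned

    # Examine who signed the downloaded script ("FixWiFi.ps1" is a made-up name)
    # and whether the signature is valid, before accepting it.
    Get-AuthenticodeSignature -FilePath .\FixWiFi.ps1 | Format-List Status, SignerCertificate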

    What this could mean for the cottage industry built around post-catastrophic support and recovery is a clarion call for metamorphosis. Rather than write and publish exhaustive 33-step instructions for sometimes mindless "wizards" (such as this little gem), someone with the know-how to ascertain the real problem using tools such as Windows Management Instrumentation (always available through PowerShell) can actually publish real, working solutions in the non-metaphorical sense.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.


    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/05/How_to_really_test_the_Windows_7_Release_Candidate'

    How to really test the Windows 7 Release Candidate

    Publié: mai 5, 2009, 8:31pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.

    If you're like me...first of all, my apologies. Nevertheless, although you have more than one computer you use every day, you do the majority of your production work on a Windows XP Professional-based system. You could have upgraded to Vista, but you didn't, and it wasn't out of procrastination. It's because you knew the costs involved and the headaches that would ensue, and because you also knew from experience that Vista, in the end, was slower for everyday tasks than XP. Certainly it's more secure, but you have enough access to security software that you can stay vigilant and maintain your systems without serious trouble. Yes, that's if you're like me.

    The promise of Windows 7 is that it's the Windows XP upgrade you've been waiting for. The problem with Windows 7 is that it won't upgrade Windows XP, at least not directly. You can make the two-step jump from XP to Win7 through Vista, as we discovered a few months ago, but you need a Vista installation disc to do it. You don't have to register the Vista installation in-between, so you can legitimately borrow one from a friend.

    But if you're like me, then what you need to know from the Release Candidate is this: Will the software you use every day be able to survive the jump from XP to Win7 mostly intact, or would you actually sustain fewer headaches by installing Win7 on a fresh PC and reinstalling all your apps? And if the latter is the case, should you even bother with a test upgrade?

    Here's the part where I can start congratulating you for being like me, because you're at least smart enough to know that your desktop production PC should have at least two hard drives, and that your documents and media should be located on the second one (drive D:). You keep your operating system and applications on a smaller, faster hard drive that's easier to maintain and upgrade. I keep XP and its apps on an 80 GB drive -- today, it's almost impossible to find an 80 GB drive new. So now you have this option: You can purchase a relatively inexpensive hard drive that can easily hold two copies of your prime partition. A half-terabyte Barracuda costs about sixty-five bucks.

    Creating an image backup to an external USB drive is an almost academic process now. I use Acronis True Image as my backup program, and it hasn't failed me once -- you can download it from Fileforum, use it now, and pay for it later. You'll need a good backup program like this to take a snapshot of your C:\ drive that can be recovered later to your new, bigger hard drive without endangering the activation on your XP.

    Acronis comes with a recovery system that enables you to boot a special hard disk restoration environment from CD-ROM. You'll actually want to use this to restore XP to two partitions -- one just for using XP normally and doing your regular work, and the other for testing Win7 knowing you have a safety net if all goes wrong.

    As an alternative plan, if you have never used the BartPE system recovery environment, you may come to wonder why you haven't let Windows fail more often. You do have to build the recovery CD-ROM yourself, because it requires you to supply Windows XP as the kernel, and BartPE can't supply XP for itself. Once you've booted your system with BartPE in the optical drive and your new hard drive installed, you can use it to create your two partitions.

    Although technically you should be able to boot essentially the same operating system (for now) from two partitions, there's a chance you may need to manually edit the BOOT.INI file to ensure that there are two options when you boot from your new hard drive, and that the second option accounts for partition(2). Once that's worked out, you should be able to boot to the partition you've chosen for your Windows 7 test.
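
    As a sketch only -- the ARC paths in your own BOOT.INI depend on your controller and disk layout, so copy nothing blindly -- the dual-boot arrangement described here might end up reading like this:

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (everyday work)" /fastdetect
    multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Windows XP (Win7 test)" /fastdetect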

    We've heard two stories from different sides of Microsoft. Company strategist Mike Nash has been quoted as saying you should treat the RC just as though it were a final release; at the same time, we've heard it might not be desirable to "upgrade" from the RC to Windows 7 RTM once it becomes available. My suggestion is that, while you do go ahead and test your applications in the Windows 7 partition you've created, you do not plan to upgrade from the RC to Windows 7. Instead, make a note of the best practices you've learned from the business with the Win7 partition, and then be prepared to wipe the RC clean and start over with a fresh upgrade...if you take Acer's word for it, once October rolls around. As long as you keep your vital documents and media on a separate drive anyway, you do compensate a bit for all the trouble you're putting yourself through. (That said, you might consider backing up your documents and media anyway before proceeding, for reasons I'll talk about in a bit.)

    Next: What should you be testing for?

    What should you be testing for?

    Although Microsoft has published a handy reviewer's guide specifically for the Windows 7 Release Candidate (Microsoft Word file available here), its purpose is to highlight the new and enhanced features of the operating system. That makes sense if you're installing Win7 on a fresh physical or virtual machine; but if you're testing for usability, there will be a lot more to it. Even if you're just trying a test upgrade from Windows Vista and not XP, you will want to experiment with, and make notes about, the following:

    • The integrity of your system folders. As you may know, system folders since Windows XP have been convenient aliases for deeply nested subfolders, especially in the case of personal folders. Vista moved personal documents to the physical location C:\Users\username\Documents, and the fact that Win7 has changed the name of the alias to this location from "Documents" back to "My Documents" (as it was with XP) does not change this location. However, although Microsoft-brand apps and others should have the least difficulty handling system folder locations (requesting their targets from the API), other software may have difficulty knowing where your personal documents are located even though, physically, they haven't moved. (For a quick way to check what the API reports, see the short sketch following this list.)
    • The integrity of your MP3 and media files. This is another reason you may want to back up your documents and media as well: We all know that Microsoft is testing some new features in Media Player, some of which are...well, surprises. The other day, I encountered some "album art" in my Windows XP physical directories that I didn't put there myself; it just so happened that Media Player 12 in a virtual Win7, on its whirlwind, clandestine trip around the network, had started cataloguing files within their native directories. One side-effect that some users of the earlier Win7 betas encountered was the unexpected lopping-off of the tops of their audio files by Media Player 12 as it adjusted the metadata of MP3 files...again, without notifying anyone. Supposedly this problem was fixed by a Media Player update, but that's not to say something similar won't crop up again.
    • The efficiency of your security software. So far, this has been a largely unexplored subject with regard to the Win7 betas: How well will third-party security and anti-malware software work in the new system? Though there are no sweeping kernel changes as there were for Vista (Win7 is actually an in-generation Vista upgrade, like Windows 98 was for Windows 95), changes to system folder aliases and the addition of the new shared libraries feature may necessitate behavioral changes to even Vista-era anti-malware software. What's more, the new Action Center feature of Win7 is supposed to coordinate all types of security activities, including with third-party products; and existing products won't be prepared for such a coordinated effort.
    • The connectivity of your network components. The Homegroup Networking feature of Windows 7 is geared to connect networking components and other Win7-based computers to Win7-based networks. But you cannot use a Vista-based computer or older as a homegroup member, at least for now (conceivably, Microsoft could come up with a Vista upgrade, though it may simply choose not to). Win7-based homegroup members are supposed to behave better together than ever before, as well as stay compatible with workplace networks to which they may also belong from time to time. But you can only test this with two Win7-based components. However, if you have two computers, you could conceivably run one copy of Win7 in Virtual PC 2007, hosted by Vista or XP. You could then set up your physical Win7 machine as the nucleus, if you will, of the homegroup, and then set up libraries to be shared between the two. You can also then test Windows Media Center on the physical machine, especially to judge how well homegrouping aids in the promise of steadier streaming, especially over 11g and slower Wi-Fi connections.
    • The effectiveness of the revised automated troubleshooting. During the beta phase of a product's testing, companies (especially Microsoft) typically forego completing the documentation process, often leaving pages blank. Some type of automated troubleshooting has existed in Windows since XP; but in Windows 7 and Windows Server 2008 R2, Microsoft's trying an interesting new strategy, which is only barely alluded to in this document: It's enabling third-parties to create troubleshooting packages for known problems, letting others generate PowerShell and other scripts that could effectuate solutions to known and published problems. We may see the first trials of this approach (if someone other than Microsoft is brave enough to take the first steps) during the RC phase of Windows 7 testing.
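
    On that first point about system folders: well-behaved software never hard-codes a path like C:\Users\username\Documents; it asks the API where the folder lives. A quick, harmless way to check what the API reports on your test installation -- a sketch using the .NET call that PowerShell exposes -- is:

    # Ask Windows where the personal folders physically reside, regardless of
    # whether the alias currently reads "Documents" or "My Documents".
    [Environment]::GetFolderPath('MyDocuments')
    [Environment]::GetFolderPath('MyPictures')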

    Essentially, what you should be looking for during the RC phase is the proper transition path between your old operating system and Windows 7. With an easily restorable backup of your old system in place, and a parallel version running in an alternate partition, you should be comfortable to experiment with ideas that might fail -- for instance, removing older-era anti-malware software, or installing old software you've used before and that you wanted to use with Vista but couldn't. Take thorough notes of your process, and make system restore points frequently. When you do encounter problems, consider them discoveries that you're glad you found now rather than later.

    Download Windows 7 Release Candidate 32-bit from Fileforum now.

    Download Windows 7 Release Candidate 64-bit from Fileforum now.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/05/Top_10_Windows_7_Features__9__Native_PowerShell_2.0'

    Top 10 Windows 7 Features #9: Native PowerShell 2.0

    Publié: mai 5, 2009, 12:19am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Ever since the command-line tool code-named Monad escaped by the skin of its fingernails from Microsoft's laboratories in 2006, there has been debate and dispute over whether the company has finally, once and for all, replaced DOS. Since that time, we've seen the arrival of an entirely new generation of Windows users who believe "DOS" is an acronym for "denial of service," and who are baffled as to the reasons why anyone would want to command or control an operating system using text.

    It isn't so much that text or command-line syntax is the "old" way of working and that Microsoft Management Console is the "new" way. As Microsoft discovered, to the delight of some in its employ and the dismay of others, using the command line as the fundamental basis for Exchange Server improved its usability and efficiency immensely. The graphical environment simply does not translate well -- or to be fairer, not effectively -- to the task of administration.

    As of last October and up through today, at least, Microsoft's often-variable plan for PowerShell has been to include it with every SKU of Windows 7 and Windows Server 2008 R2. We will not be surprised if we suddenly discover it isn't included with the Home Basic edition, but its inclusion even there would be an indication that Microsoft is confident about being able to hand over the real-world equivalent of Doctor Who's "sonic screwdriver" to every Windows 7 user in existence.

    PowerShell can be used exactly like DOS, as it maintains a vocabulary of "aliases" that recognize DOS commands as alternates, but it is not DOS. Rather, it is a very sophisticated administrative language designed not for programmers or for veteran developers, but instead for admins and system managers who need the ability to communicate a lot of information in as little space as possible.

    Looking at just the front end, PowerShell takes the user right back to the TRSDOS era, with a blinking cursor beside a caret. But from there on, the resemblance to the 1970s becomes fleeting and momentary. You do not have to learn the MS-DOS batch file language to create a simple script that enables you to make a common fix. That doesn't mean learning PowerShell isn't a skill in itself -- it is, believe me. But its principles of consistency and economy of verbiage (something I myself may yet aspire to) enable anyone, theoretically, to craft and deploy a safe and powerful set of free tools that easily substitute for a myriad of commercial anti-malware and system tune-up programs.

    Here's an example I created for a tutorial on PowerShell for InformIT last November: Imagine an MS-DOS batch file whose purpose is to scan the running processes in Windows, find Windows Media Player, and make a log of the amount of available system memory whenever it does find that app. Assuming there were some kind of API for running Windows processes in DOS (and there never was), the number of conditional statements and print formatting instructions would make such a script enormous.

    With PowerShell, I managed the entire concoction in just two instructions:

    $table = @{Expression={$_.VM};Label="Memory";width=10},@{Expression={$_.CPU};label="Uptime";width=10}, @{Expression={get-date};Label="Time";width=24}

    get-process | where {$_.ProcessName -match "wmplayer"} | format-table $table | out-file "mediaplayer.log" -append

    Now, there's a lot going on with these two instructions, thanks to PowerShell's ability to compress a lot of instruction in a little space. The first line, for example, actually creates a little database table in memory, and formats it for the columns needed for this little log file. Each of those columns is given a name, just as though we were building this table for MySQL or SQL Server. The format becomes a kind of template string in memory, which is given the name $table (TRS-80 veterans will remember the old BASIC language where the $ character fell at the end of the string variable name rather than the beginning).

    The second instruction is actually in four segments, and each segment passes on its results to the next by means of the pipe | character. Commands that are native to PowerShell are called cmdlets (pronounced "command-lets"), and they all have a verb-noun syntax. There are dozens of cmdlets that start with get, and because the syntax is consistent, it's easier to remember which command does what function. The get-process cmdlet is fairly self-explanatory, in that it generates a list of running processes. If the instruction stopped there, it would generate that list on-screen, but instead the pipeline takes the output to a where function that filters it. During this process, the output of get-process behaves like an object, as in "object-oriented language," so you can refer to the members of the object like properties. This script aligns those properties with the template created for the $table variable.
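
    To see how transferable that pattern is, here's a minimal variation of our own (not part of the InformIT tutorial) that swaps in a different get cmdlet but keeps the same pipeline shape:

    # Same verb-noun vocabulary, same pipeline: list running services instead.
    Get-Service | Where-Object { $_.Status -eq "Running" } | Format-Table Name, DisplayName -AutoSize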

    Next: How far does PowerShell 2.0 go?

    The extent of PowerShell's language capabilities is actually almost overwhelming, due to the fact that it can connect with essentially everything in Windows. For example, since PowerShell is effectively a .NET language, the entire .NET intermediate type libraries are available to PowerShell -- so any running .NET program may be addressable as an object. Also, any program using the type libraries of the older Component Object Model may be addressable and manipulable through PowerShell -- and that includes all of Office 2007. The same document objects for Word and Excel, for instance, that were made addressable in VBA for macros are also, by definition, addressable in PowerShell.
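
    By way of illustration -- a sketch of ours, which assumes Word 2007 is installed on the machine -- driving Word's COM object model from PowerShell takes only a few lines:

    # Attach to Word through COM, the same object model that VBA macros script against.
    $word = New-Object -ComObject Word.Application
    $word.Visible = $true
    $doc = $word.Documents.Add()
    $doc.Content.Text = "Hello from PowerShell"
    # When finished: $doc.Close($false); $word.Quit()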

    But easily the most eerily cool feature -- the one that always makes newcomers fall backwards in their chairs the moment I show it off -- is the ability to make different types of navigable database structures "crawlable" like directories in DOS. Meaning, you can "mount" the Windows System Registry as though it were a disk drive (try this for yourself: cd HKLM:, for HKEY_LOCAL_MACHINE). Then you can copy, move, and delete keys and settings as though they were files, and tiers as though they were subdirectories.
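
    A harmless, read-only sketch of the same trick, for the curious:

    # "Mount" the Registry like a disk, then browse it with file-system verbs.
    Set-Location HKLM:\SOFTWARE\Microsoft
    Get-ChildItem                               # subkeys list like subdirectories
    Get-ItemProperty .\Windows\CurrentVersion   # settings read like file attributes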

    PowerShell 2.0 with the integrated development environment, in Windows 7 RC.

    Beginning with Windows 7 and WS2K8 R2, there will be two modes of operation for PowerShell. First there's the command line in use since the days of the Monad beta, and it can fully substitute for CMD.EXE. Win7 users will find it in their Accessories folder under Windows PowerShell.

    But there's also an attractive integrated scripting environment (ISE) that gives a script writer for the first time the kind of workbench that developers have had at their fingertips since Visual Basic 1.0. There's a tabbed scripting window for multiple scripts, with automated syntax highlighting and error checking; an "immediate window" for dropping little quizzes (or little bombs) into a running version of PowerShell; and a scrolling output window like paper tape.

    Now, the question that's always on skeptics' minds at this point -- with very good reason -- concerns security. The most influential viruses of the turn of the decade were simple VBScript files that masqueraded as different types of attachments in Outlook e-mail messages, and sending them to victims was, for the perpetrators, like shooting ducks in a gallery. Microsoft's greatest fear for the last several years has been a repeat occurrence of the "ILOVEYOU" fiasco. In 2005, someone working with an early Monad beta generated a proof-of-concept "virus" -- about as sophisticated as early TRS-80 BASIC -- that a security software company immediately branded as the "first Vista virus," even though a) it infected no one, and b) it delivered no malicious payload even in the proof-of-concept.

    But that was before PowerShell implemented its key security feature, which to date has been at least as effective as the DHS has been in thwarting successive terrorist threats: By default, PowerShell cannot run scripts at all. You have to change the operating mode of the interpreter so that it can; and when you do, you can choose to only open the doors a little bit. For instance, you can have it only run scripts that have been digitally signed and authenticated by you and no one else. Or you can have it run scripts whose digital signatures you trust.
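
    In practice the dial looks like this -- the cmdlet names are real; which setting you choose is up to you:

    Get-ExecutionPolicy                 # a fresh install reports "Restricted": no scripts run
    Set-ExecutionPolicy AllSigned       # run only scripts bearing a trusted signature
    Set-ExecutionPolicy RemoteSigned    # run local scripts freely; downloaded ones must be signed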

    Last October, PowerShell creator and Microsoft architect Jeffrey Snover told a story for his blog...and yes, he looks and sounds in person pretty much like he writes for his followers, especially when he spread the news of PowerShell 2.0 in Win7:

    "One of my moments of clarity came during one the security crises a few years ago. [Former Microsoft President] Jim Allchin set out an e-mail with instructions for how to configure your machine to avoid the problem and told us to get all of our friends and family to do the instructions. The instructions started with 'go the Start Menu then go to All Programs then.. then.. then… then… click…then…click…then…click…' OMG! I can't follow instructions so I keep screwing it up over and over again. I eventually got it done but then thought to myself, 'Wait -- I'm supposed to call up my folks and have them do this?' That is a phone call that never got made. I remember thinking, 'This is freaking crazy! If Jim gave me a command line, I'd just cut and paste it and be done. I could get my folks to cut-n-paste a command line!' There were only two problems with that story: 1) PowerShell wasn't installed on my folks machine and 2) PowerShell wasn't written at that time. We are now on a path were this is going to be simple and easy to do."


    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/01/Flashback_1990__The_debut_of_Windows_3.0'

    Flashback 1990: The debut of Windows 3.0

    Publié: mai 1, 2009, 10:24pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    This is most likely neither the first nor the last article you will read on the subject of Microsoft Windows 3.0. The attention being given the new product is not only deserved, but in many cases carefully orchestrated. The weeklies and fortnightlies have already extolled the merits of Win3's "three-dimensional" buttons, proportional text, and now-boundlessly managed memory. Their gold-star awards have no doubt been bestowed upon the product for being the best in its class, albeit the only product in its class. The "pundits" have already laid blame upon someone for Win3's alleged tardiness to market. The entire story is so well-patterned, it may be read without ever having laid eyes to the printed page.

    Yet if we follow the pattern, we miss the real story...

    It is May 1990. For several months, reporters had been prepared by Microsoft to cover what was being billed as the most important event in the history of software. It was the beginning, we were told, of the end of DOS, and the birth of a new software "ecosystem" that enabled independent developers to build graphical applications for the first time, without having to jump through the many hoops and stroke the countless egos of Apple. Microsoft would have a hands-off policy in the development of software that supports what was being called, for the first time, the Windows Operating Environment.

    Sure, it still used the MS-DOS bootstrap, but don't tell anyone that. And sure, that bootstrap still required 640K of conventional RAM, but don't tell anyone that either. The real benefits were to be seen in something Macintosh itself couldn't do: run more than one application at once, with true multitasking and pipelining for the very first time...and all in color.

    A screenshot from File Manager in Microsoft Windows 3.0, circa 1990.

    The prospects for applications were boundless, and Microsoft wanted to be seen as opening all the doors and not stepping through them first. The first question in journalists' minds was, would there be a counterpart to Hypercard? Without a Hypercard, Windows may as well be broken. Rest assured, we were told, a company called Asymetrix would provide the toolkit that would revolutionize programming, with a bit of Microsoft's funding. The next generation of metaphors for Windows was being created not by Microsoft but by Hewlett-Packard, for a product called NewWave -- again, Microsoft made certain journalists knew, with its help but not its supervision. And the world would know Windows was for real when it used an everyday spreadsheet with a name familiar to everyone: Lotus 1-2-3 G.

    In the spring of the turn of the decade, I had a regular series in a magazine that was widely considered to be "Computer Shopper in exile," called Vulcan's Computer Buyer's Guide, staffed by many former Shopper regulars who would, like myself, become regulars there again once a dispute with the new owners, Ziff-Davis, was resolved. I had the lead role in covering the biggest software release in history, for a magazine whose editors told me flat out, "Use as much space as you need."

    Months earlier, Microsoft had granted me some of the first demonstrations of Dynamic Data Exchange ever shown outside its laboratories. It was astounding to me, and I was proposing to write a book on it all, except that none of the book editors at the time knew, or appeared to care, about running two applications at once. "We want a book about Excel or a book about Word," one editor told me toward the close of a conversation. "No one wants to read a book about Excel and Word."

    It was uncharted territory, as every editor I worked with kept reminding me. One of these days, my former Shopper editor told me, you'll be writing this story in May and someone on the other side of the screen will read it in May. But for now, it was the August issue we were working with; complete with interviews with everyone we thought would matter -- Asymetrix, HP, and Lotus included -- I headed forward into 33 pages of draft copy, with a full head of steam...

    Yet if we follow the pattern, we miss the real story. There is a real development taking place between the authors of and for Windows 3.0, which concerns the remodeling of the computer application. We are familiar with the application as a program and its associated data, which is entered and exited like a jewelry store or a bank. We sometimes see ourselves "in" an application, just as we often see ourselves "in" the subdirectory pointed to by the DOS prompt. The data we need while we're "in" the program is much like the diamond necklace behind the display case; we're allowed to look at it and touch it, but unless we're very crafty, we're not allowed to take it outside. It doesn't belong to us, even if the data's very existence is due to our having typed it in.

    The entire contraption of the DOS environment -- along with the guilt feelings it so subtly leaves us with -- is being shattered by Windows 3.0. There is a movement under way by Microsoft and its independent software vendors (ISVs) to abolish the structure which grants exclusive ownership rights over a set of data to an application. Having done that, the movement will also seek to dissolve the programmatic barricade which surrounds the once-exclusive application, allowing for the equal distribution of correlated tasks within an arbitrarily-defined computing job, to other programs non-specifically.

    It's a difficult concept to discuss in the orchestrated fashion with which we have become accustomed, so instead I offer a hypothetical situation: Assume you have an inventory list card file. You want to compare gross profit percentages, so you demand such a list from the computer. Your spreadsheet -- whatever that may be -- shows you the list. You didn't need to save the card file, translate it, export it, and re-import it -- the list simply appeared. You want to see how these figures look graphically, so instantly you see a detailed pie chart. You'd like to make this chart part of your report to your superiors. This is quite simple to accomplish. Since what you're reading is the report to your superiors, the word processor saw your chart and automatically composed a standard form. This was sent to your typesetting program, which is providing the image you're seeing now.

    Your superiors are in six different countries, and two of them are out on the road somewhere. You tell your machine to send them all a copy of the report you're looking at now. The machine already knows two of them have fax machines, two are available via WANs, and the other two have cellular phones connected to laptops. You neither know this nor care; you just have your computer "send" the report to them, regardless of the media of transmission. The report is received in six different places, even if the recipient computers' operators weren't using their machines at the time. A mere three minutes of your life have been expended in the processing of the weekly profit report.

    You have just been witness to an example of the model for the meta-application -- one smoothly-flowing, correlated process combining the resources of several programs from different vendors. This is the model Windows 3.0 seeks to gradually implement. Actually, this is what OS/2 was supposed to implement at first; its muddled and haphazard development agenda has prevented it from leading the way.

    The meta-application is not an inevitable fact of computing; the marketing debacles of cross-vendor cooperation it imposes may render it as ineffective as OS/2 in changing our computing habits. Still, it is something to be wished for. And it is a far more important facet of the Windows 3.0 story than faceted buttons and little pictures. The way in which world industry and commerce works is not affected in the least by faceted buttons and little pictures.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/05/01/Top_10_Windows_7_Features__10__Homegroup_networking'

    Top 10 Windows 7 Features #10: Homegroup networking

    Publié: mai 1, 2009, 6:59pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Beginning now, Betanews is going to get a lot more intimate with technology than you've seen from us before, particularly with Microsoft Windows 7 now that it's becoming a reality. Next Tuesday, the first and probably only Release Candidate of the operating system will become available for free download.

    It's probably not so much a testing exercise as a colossal promotional giveaway, a way to get Windows 7 out in the field very fast...and use that leverage to push Vista out of the way of history. So much of what you'll see in the Release Candidate in terms of underlying technology is finalized; any tweaks that will be done between now and the general release date (which PC manufacturer Acer blabbed last night will be October 23) will likely be in the looks department.

    So with a reasonable degree of confidence that the Win7 RC is much more than half-baked, today Betanews begins a continuing series looking into what we believe to be the ten most important new features that Win7 brings to the table -- features that represent significant changes to the platform we've been calling Vista, and changes which appear very likely to be improvements. Maybe they should have been part of Vista to start with.

    There's no reason that the experience of setting up networking equipment at home should be a subset of the pain and misery businesses sustain when they toil and sweat over Vista. Business networking has evolved into a very complicated context that cannot be made simpler or more palatable or livable through the use of any metaphor you can come up with. You can't make Active Directory simple enough for everyday home users to want to wrestle with it, or even for sophisticated network admins to want to deal with the same drudgery when they get home.

    In Microsoft laboratory projects that first came to light during the "Code Name Longhorn" project in 2003, engineers found themselves reasoning this way: There's only a few basic principles that home network users want to see implemented anyway. They want all their machines to share content with one another. They want any resource to be visible to the entire network (why would you want to hide a printer?). If they do mean to hide something from accessibility, users want the ability to do so explicitly, but only when it's necessary. They want portable components and devices to know they're on the network when they're in range or plugged in, and for the network to know when they're gone. And they want other people's equipment to stay off of their network.

    So the trust situations between home network components should be fairly straightforward. Thus, rather than forcing home users to wrestle with enterprise-class network resources the same way every day until they got accustomed to them, the engineers came up with an idea called "Castle," whose legacy is a mention in Microsoft's pre-release privacy statement for Longhorn testers. Without invoking any part of Active Directory (which would have made the Windows client far more cumbersome than it needed to be), this system created a kind of default home network user template that applied in most situations, created the trusts that most users would expect, and gave users easier ways to adjust those trusts when necessary.

    Vista was so late to the game in getting anything even partly resembling Castle to market that only in Service Pack 2 -- which hasn't even been released to the public yet -- will we see a feature called Windows Connect Now, a facility that has worked just fine in Windows XP SP2, implemented for the first time in Vista.

    One example of the radically simplified homegroup setup in Windows 7 RC.

    Finally, Windows 7 is giving this concept a try, with what's called the Homegroup (now with a lower-case "g," in keeping with the growing trend to remove unnecessary upper-case from product names). The basic concept boils down to this: If Win7 devices can identify themselves as being "at home" when they're on premises, then there's really no reason why their shared resources can't all be seen as unified. In other words, not "Scott's Pictures" and "Jennifer's Pictures" but "Pictures."

    Enrolling a computer as a homegroup member is a simple process -- so simple that reviewers of the earlier Win7 betas worried, with good reason, that its security could prove as porous as Windows XP's. To become a member of an existing homegroup, one need only know the password, the default for which was generated when the first Win7 computer created the homegroup. For now, only Windows 7 computers can be homegroup members, and that will likely always be the case, seeing as how WCN functionality was only just now added to Vista SP2 (unless there is an SP3 to come).

    Next: The promise of single media libraries...

    The real payoff from homegroups comes in the form of libraries, which is Win7's new aggregate view for shared system folders. Under this system, like content from multiple locations can be made accessible from a single resource to all members of the homegroup. While it seemed to make sense at first to segregate content in a home network in accordance with how accounts are allocated, the way things ended up, keeping track of locations as well as categories ("Pictures of Dad belonging to Jake," "Pictures of Dad belonging to Dad," etc.) became too much of a headache...the kind with which Vista eventually became permanently associated.

    Perhaps the true test of homegroups' and libraries' usefulness in Win7 will come with the new Windows Media Center for Home Premium and Ultimate users. Currently in Vista, WMC enables you to set up "watch folders" throughout a home network, presumably with the idea of being able to automatically enroll new content as it enters folders everywhere in WMC's purview. The problem is, not only is WMC watching those folders, but so are you, so you end up having to traverse the network directory tree to locate what you want -- not unlike playing a game of Frogger blindfolded.

    Under the homegroup system, libraries that aggregate content throughout a homegroup will be visible to the new WMC as a single source. You want videos, you go to "videos." And conceivably (this is something I'll have to see myself to believe), a PC running the new WMC will be able to stream content from any member of the homegroup, to any member of the homegroup, almost as though WMC were a passive server.

    If you're a Media Center veteran, you may already be hearing the comfortable plinking sound of unspent coins being returned to your piggy bank. There has actually been a cottage industry in Media Center Extender devices being sold to individuals who, technically, didn't actually need them. The MCE is supposed to make networked devices accessible to WMC, and some devices like external hard drives do so legitimately. But many such devices -- especially the ones that promise to stream photos, music, and videos to any PC in the house -- are essentially stripped down Wi-Fi adapters, some of which are being purchased by folks who already have Wi-Fi adapters.

    In a homegroup-endowed world, these particular customers would not need MCE devices; they'd use the routers they already own to let Windows do the job that it was supposed to do in the first place.

    From the perspective of a Windows engineer, the biggest barrier the homegroup system may overcome is that of enrolling portable PCs as homegroup members while they're in the home, and yet enabling them to be domain members while they're in the workplace. Even with Vista, this was essentially impossible even though its newer TCP/IP stack included setup for alternate IP locations. I personally wrestled with this issue to no avail; at present, it's impossible for a business' laptop PC that uses a VPN to also be a member of a local Windows 3.11-style workgroup; it can be one or the other, but never both.

    The promise of Windows 7 is that laptops may be transported to work, become "business PCs," and be enrolled with all their enterprise-level Active Directory privileges; then be taken home, become "home PCs," and be open to all the family's shared files, aggregate libraries, and other conveniences; and ne'er the twain shall meet. This will be an extremely tall order, which if fulfilled, will be fabulous: Corporations' policies for the use of company equipment, or even personally-owned laptops with access to company resources, only tightened during the Vista era.

    If the computer truly is the network, as folks like Bill Gates have been saying for decades, then perhaps part of what had been plaguing Vista all this time is due to home users' perception of the task of networking. The homegroup system is a big gamble to address and solve this perception problem, and with all its promise, it can either succeed spectacularly or fail spectacularly. We'll probably be seeing some of the spectacle long before the final release date.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/30/XP_Mode_is_for_real__First__Windows_Virtual_PC__beta_accompanies_Windows_7_RC'

    XP Mode is for real: First 'Windows Virtual PC' beta accompanies Windows 7 RC

    Publié: avril 30, 2009, 10:18pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Validating the news we received last week of the existence of a virtualization layer, Microsoft this morning unveiled for MSDN and TechNet subscribers the first beta of a new and special edition of its virtualization software geared specifically for Windows 7. Windows 7's first release candidate went live to those subscribers this morning as well, and will be available to the general public next Tuesday.

    Windows Virtual PC already has its own Web site. It's the next edition in the chain whose current version is called "Virtual PC 2007," although this time, the software is specifically geared for Windows 7, and for computers with virtualization support in hardware. That covers nearly all modern CPUs anyway, but specifically Intel-brand CPUs with Intel-VT and AMD-brand CPUs with AMD-V.

    Think of the new "Windows Virtual PC" (WVPC) as Hyper-V for the client side. Although the technology credited with this innovation is still being called Microsoft Enterprise Desktop Virtualization (MED-V), the new Web site is reporting that WVPC will be supported on all Windows 7 SKUs including Home Basic and Home Premium. "XP Mode," however, will only work on Enterprise, Professional, and Ultimate SKUs.

    A network application that requires Windows XP appears to run fine in Windows 7 under 'XP mode' virtualization. [Photo credit: Microsoft Corp.]

    Up to now, Virtual PC users have been accustomed to hosting guest desktops within a hypervisor layer. Betanews uses Virtual PC 2007 (and Sun VirtualBox for hosting 64-bit Windows and Linux systems) almost every day in testing. The new version is very obviously being geared for everyday use by general users rather than testers like us. Borrowing a cue from its application virtualization, sometimes called SoftGrid, the new WVPC will enable some guest environments to seamlessly integrate with the host desktop.

    That feature will get the most use in conjunction with what's being called "Windows XP Mode." It will be distributed as a kind of drop-in, apparently containing the XP kernel. Setting up this drop-in with WVPC will apparently be "wizard-ized," with some functions automated -- so it won't be like installing Windows XP on a PC, a process that nobody in his right mind really wants to re-live. Once completed, users should have the ability to run Windows XP programs that have misbehaved in Vista up to now, within an envelope more conducive to XP, but without separating the "XP realm" from the "Windows 7" realm.

    That's especially important because typical hosted environments run only from virtual hard drives. XP Mode will be able to coexist with the user's regular physical drive, sharing either or both drive letters and permitted directories, as well as contents of the system clipboard.

    One way you will definitely be able to spot an XP program, however, is by looking for that cobalt-blue window frame we all remember...and grew a little tired of.

    The new edition of the virtualizer software will also support XP, Vista, and Win7 itself in the traditional hosted desktop. But according to Microsoft, "application modes" for Vista and for Win7 (for instance, disabling a tested application's ability to contact the host OS) are feasible as well, even though XP Mode will be the only drop-in available at present.

    One very big question that Microsoft has not yet addressed -- and which Betanews is pressing the company on right this moment -- concerns licensing. While it appears WVPC itself will remain free just as Virtual PC 2007 has been, the XP Mode drop-in appears to contain the XP kernel. So will users who have already purchased and activated XP at some point in their lives, have to purchase it again? Or will they get a discount? Or can users who have a real XP installation disc use it to validate their ownership? Once Microsoft gets back to us on those questions, we'll let you know the answers.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/30/Windows_7_RC_now_being_distributed_to_MSDN__TechNet_subscribers'

    Windows 7 RC now being distributed to MSDN, TechNet subscribers

    Publié: avril 30, 2009, 5:06pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The first "real" copies of Build 7100, the Windows 7 Release Candidate -- quite likely, the only one there will be -- were officially distributed to Microsoft's MSDN and TechNet subscribers at 11:00 am EDT / 8:00 am PDT Thursday morning. Included in this morning's distribution are the 32- and 64-bit editions of the Ultimate SKU of the operating system, plus the all-new Windows Driver Kit Release 7 for those who'll be building device drivers for the new OS using the revised driver model; the Automated Installation Kit for remote deployments using servers; and the updated Windows 7 SDK RC in x86, x64, and Itanium editions.

    11:15 am EDT April 30, 2009 - Almost immediately upon the RC's public release, the response time for Microsoft's Web services became extremely slow. It's a good sign for the company in one respect: Not all of Microsoft's developers took the bait and downloaded one of last week's leaks.

    11:35 am EDT - The slowdown lifted about three minutes ago, and downloads resumed at a respectable pace -- fair enough when something this important and popular is happening.

    5:08 pm EDT - Almost immediately after installing Windows 7, you're given some fresh hints and clues -- obviously quite deliberately -- that you're not using Vista (or XP) any more. One is the first notification of the existence of the Action Center, the new upbeat, centralized component for handling and monitoring system security matters. It lets you know it's there for the first time, in a fresh system, by reminding you that there isn't any antivirus software installed.

    The Action Center panel introduces itself for the first time to the new Windows 7 user.

    Another nice feature that hasn't gotten a lot of play, but which suggests folks at Microsoft have finally been listening to users: After installing applications and rebooting, Windows 7 is capable of restoring open applications to the state they were in before the reboot. We've seen this behavior so far with Internet Explorer 8 and with other Windows 7 apps open, such as the new version of Paint (with the "Scenic Ribbon"), but we're interested in how deeply this behavior can extend to other apps, including non-Microsoft brands.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/30/Time_Warner_may_or_may_not_spin_off_AOL__says_SEC_filing'

    Time Warner may or may not spin off AOL, says SEC filing

    Publié: avril 30, 2009, 4:47pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Early morning news reports told readers that Time Warner has begun the process of spinning off its AOL division into a separate entity. This time, the earliest versions of those reports did not cite unnamed or anonymous sources, or wireless microphones attached to rats traversing the air ducts of the headquarters building; instead, they cited this morning's regulatory filing with the US Securities and Exchange Commission.

    As it turns out, that's not exactly what the filing says at all. A Time Warner analysts' briefing this morning will likely lay out the details, but here is what we know based on the source that was actually cited: Time Warner's board of directors has not reached a decision with regard to whether it wants to spin off the AOL unit to TW's shareholders or to anyone else, although the "Company" (read: executives) believe that such a move is probable. However, everyone acknowledges that there may be other possibilities in the works. Here is the complete passage in question:

    During 2008, the Company announced that it had begun separating the AOL Access Services and Global Web Services businesses, as a means of enhancing the operational focus and strategic options available for each of these businesses. The Company continues to review its strategic alternatives with respect to AOL. Although the Company's Board of Directors has not made any decision, the Company currently anticipates that it would initiate a process to spin off one or more parts of the businesses of AOL to Time Warner's stockholders, in one or a series of transactions. Based on the results of the Company's review, future market conditions or the availability of more favorable strategic opportunities that may arise before a transaction is completed, the Company may decide to pursue an alternative other than a spin-off with respect to either or both of AOL's businesses.

    The final sentence in that passage implies that a formal review of the possibility of a spinoff has not even happened yet, and that's the first step executives would actually need to take before going forward with a plan. While executives are probably in favor of that direction -- and with good reason -- there remains cause for hesitation, most notably the question of whether a detached AOL, even without its less profitable dialup services, would garner a high enough market value for trading in this dismal economic atmosphere.

    Putting a damper on this plan (and perhaps having done so since the beginning of this year, as we've learned only now) is the fact that back in January, Google exercised its right to ask Time Warner to sell Google's 5% equity stake in AOL in an initial public offering. This according to the SEC filing, and not a blogged interpretation of the SEC filing. That's a sign that Google wants to cash out. According to the filing, Time Warner had the right to pre-empt an IPO of that 5% stake by issuing its own bid for the stock instead, which it has decided to do. That's probably smart if the company wants to eliminate any chance of market perception dragging that stock value lower, even though it means an expenditure on TW's part.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/29/AMD__We_didn_t_say_anything_about_Nvidia_licensing'

    AMD: We didn't say anything about Nvidia licensing

    Publié: avril 29, 2009, 12:03am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Last week, after AMD's Wednesday conference with reporters updating its roadmap for server CPUs, we reported that the licensing situation for Nvidia and Broadcom chipsets for use in AMD-based servers looked bleak. This afternoon, AMD spokesperson Phil Hughes contacted Betanews to say that the company made no comment with regard to licensing, and continues to make no comment.

    "We haven't made any comment with regard to licensing," stated Hughes. He reiterated Server Business Unit Vice President Pat Patla's comment that AMD has only made a decision to go with AMD-branded chipsets for use in motherboards built for new Opteron processors. But when we asked Hughes whether licensing played any role in AMD's decision to only use AMD chipsets and not extend licenses to Nvidia or Broadcom, Hughes repeated that the company has made no comment with regard to licensing, only that it has chosen to use AMD chips for this purpose.

    "Our decision to go with AMD chipsets is strictly a business decision," Hughes said.

    We listened once again to the webcast, where Patla responded to a question from the audience. The question itself was inaudible, but Chief Marketing Officer Nigel Dessau told Patla, "Let me just repeat the question, it was about Nvidia, their chipset."

    Here is Patla's response in its entirety: "So for 2010 moving forward, the solutions coming out from AMD will be AMD and on AMD at this time. We don't expect to see new chipsets from Nvidia or Broadcom for server implementations in 2010."

    The follow-up was clearer: "But they will continue to support those platforms?" "All existing platforms moving forward through 2010," Patla responded. Indeed, the words "license" and "licensing" were not used in Patla's response.

    This afternoon, we reiterated our question to AMD's Hughes: The decision not to license chipsets to Broadcom or Nvidia has nothing to do with licensing? And Hughes reiterated his response, which is that AMD is not commenting on the licensing situation.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/28/Office_2007_SP2_is_released__can_indeed_save_ODF_by_default'

    Office 2007 SP2 is released, can indeed save ODF by default

    Publié: avril 28, 2009, 10:11pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Microsoft Office 2007 Service Pack 2 from Fileforum now.

    Now all Office users will have the option to load and save OpenDocument files, with today's distribution of Service Pack 2 of Office 2007. In something of a surprise -- contrary to what many at Microsoft led us to believe -- upon installing SP2 on our test systems, we immediately located an option for saving files in ODF by default. That means you don't have to "Save As" and export to ODF if you don't ever want to use Microsoft's OOXML or Office 2003 "compatibility mode;" you can at least try to use Word, Excel, and PowerPoint as substitutes for OpenOffice.

    Not that Microsoft won't give you a little heck for it along the way, in classic Microsoft fashion. For example, after changing our default format to ODF, we tried saving a simple Word file that had nothing more than a single sentence of placeholder text, nothing else. Immediately we saw the first compatibility warning: "Document1 may contain features that are not compatible with this format. Do you want to continue to save in this format?" The check box at the bottom of the dialog suggested to us that we would see such a dialog each and every time, unless and until we checked "Don't show this message again." That's Microsoft's little way of saying, don't blame us if your documents don't turn out 100% the way you expect them to.

    The little warning that Microsoft gives you when you try to save an Office 2007 file as ODF.

    The next little surprise is that the default save format is not the same as the default load format. So after you've saved your Word document as an ODT, clicking on the Office button and selecting Open gives you the usual list of files saved in Microsoft formats. In the file selector box, under Files of type, you have to scroll to the middle to see OpenDocument Text (*.odt) -- the list is not in alphabetical order, so that entry falls below XML but above WordPerfect.

    Here's how to set the default save format in SP2: Click on the Office button (the big round logo in the upper left corner) and from the bottom of the menu, select Word Options. In the dialog box, from the left pane, choose Save. Then from the list box marked Save files in this format, choose OpenDocument Text (*.odt) (in Word 2007, or its equivalent in Excel or PowerPoint). Then click on OK.
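
    For the automation-minded, the same default can in principle be flipped programmatically through Word's COM object model, whose Application.DefaultSaveFormat property accepts either an empty string (the native format) or the class name of an installed file converter. A minimal sketch in Python -- assuming the pywin32 package, and assuming "OpenDocumentText" is the converter class name SP2 registers for ODF:

        # Minimal sketch: set Word 2007's default save format via COM.
        # Requires the pywin32 package. The "OpenDocumentText" string is an
        # assumption about the converter class name SP2 registers for ODF;
        # an empty string ("") would restore the native .docx default.
        import win32com.client

        word = win32com.client.Dispatch("Word.Application")
        try:
            word.DefaultSaveFormat = "OpenDocumentText"  # assumed class name
        finally:
            word.Quit()

    The dialog route above remains the safer bet, since Word's own Save options list shows you exactly which formats your installation recognizes.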

    We expected to see OpenDocument given equal treatment along with PDF, the portable document standard created by Adobe and now treated as vendor-neutral. While you cannot "open" a PDF document in Word (that's to be expected, Word and Acrobat aren't exactly in the same category), you can use Save As to export an open document to a .PDF file. If you're an Adobe Acrobat Professional user, you've already had an easier option added to your Office menu by Acrobat itself: Save As -> Adobe PDF. However, for non-Acrobat users who may only have Adobe Reader, this extra step does save you the hassle of installing something like "PDF Writer" as a printer driver, and printing as a means of exporting.

    Office Live services have obviously been boosted in priority with SP2. Up to now, using the Office Live add-in has resulted in adding an extra place to your Save As dialog box, enabling you to directly load and store your Office documents to your personal storage space in Microsoft's cloud. Now with SP2, Open from Office Live and Save to Office Live become prominent selections in the Office menu, and the controls for signing into your online workspace are embedded into the menu -- no shoving off to IE just for the sign-in screen.

    UPDATE 5:42 pm EDT April 28, 2009 - Or maybe not. Upon further testing, we learned that the Office Live functionality does not come with SP2 right away. Rather, in our first tests earlier today, the functionality showed up on virtual systems where Office Live had been installed before, and then uninstalled. But on systems where Office Live had never been installed before, we noticed no such new functions. So evidently the SP2 package updates systems where Office Live has been registered before by way of the Office Live add-in -- which, by the way, Microsoft now also offers through Automatic Updates.

    Service Pack 2 rolls up a truckload of security patches and bug fixes issued since December 2007 -- the release date of SP1 -- and also fills an important gap in Word's and PowerPoint's functionality.

    Users of Microsoft Update this afternoon noticed "The 2007 Microsoft Office Suite Service Pack 2," complete with the "The," as a high-priority or important update along with Internet Explorer 8, which is now being pushed as an operating system update for the first time.

    Here's some advice for you: Download Office 2007 SP2 either from our Fileforum page or using Automatic Updates, and not from a link you find on Microsoft.com. The latter is what we did, and that choice prompted us to download the latest version of Microsoft Internet Download Manager, whose performance -- at least we hoped -- was not an omen of things to come. First, it crashed our Firefox browser, taking with it the active session (for that reason alone, you may want to just use IE to download SP2). But when the browser was restored, we were shown a page that merely asked us to execute the just-downloaded installer file before proceeding. Not a kind way to give us the message.

    Anyway, we did that and restarted Firefox. Then we noticed that the Download Manager failed to properly register itself; had it been registered properly, it would have detected that it was supposed to handle the automatically triggered download. As it turned out, the Registry triggered Visual Studio 2008 instead, which doesn't make for a very good download platform. We exited out of that and patched the Registry manually. Then and only then were we able to make the download properly, except that for a reason we can't fathom, the Download Manager downloaded the file twice. Remind us to give Download Manager 5.17 a low Fileforum score when we get a minute.

    We're continuing to dig into the new features of SP2, including the long-awaited module that brings the new charting functionality introduced in Excel to Word and PowerPoint. We're also looking forward to seeing whether a window updating bug that's peculiar to Nvidia drivers has been addressed. We're only moments into our examination of SP2, and we'll let you know more as we learn it.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/28/IE8_now_being_delivered_as__Important_Update__for_Vista___High_Priority__for_XP'

    IE8 now being delivered as 'Important Update' for Vista, 'High Priority' for XP

    Publié: avril 28, 2009, 8:39pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    A few weeks ago, Microsoft indicated that it would deliver Office 2007 Service Pack 2 and Internet Explorer 8 as important automatic updates to Windows users on the same day. That day ended up being today, and now many Windows users are being prompted for the first time to install IE8 as an update to their operating system. Since the product's release last month, upgrades have only been voluntary.

    Though two-thirds of the world's Web traffic is attributable to browsers identifying themselves as Internet Explorer, according to the latest up-to-the-minute data from analytics firm NetApplications, under 5% of that traffic comes from IE8. In fact, only in the last week has IE8 traffic by NetApplications' measure eclipsed HTTP requests hailing from Apple Safari version 3.2, which runs on Mac, iPod Touch, and iPhone. Requests from Mozilla Firefox 3 account for nearly one-fifth of analyzed traffic; but now, with IE8 becoming an "in-your-face" update for the very first time, Internet Explorer traffic in total may experience a bump.

    Download Internet Explorer 8.0 for Windows Vista from Fileforum now.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/28/Firefox_3.5_Beta_4__Mozilla_delivers_the_speed__as_Beta_5_gets_under_way'

    Firefox 3.5 Beta 4: Mozilla delivers the speed, as Beta 5 gets under way

    Publié: avril 28, 2009, 5:41pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download Mozilla Firefox 3.5 Beta 4 for Windows from Fileforum now.

    Test Results

    There are now (once again) three simultaneous development tracks for Mozilla's Web browser, as the first public beta of Firefox to be numbered 3.5 has officially hit the streets; the first private Beta 5 of Firefox 3.5 is being distributed to Mozilla testers; and the latest Firefox 3.6 Alpha continues to make headway.

    It would appear Mozilla developers learned a lot from last week's code-frozen version of 3.5 Beta 4, as Betanews tests indicate the organization kicked things up a gear. The last code-frozen version before the public build produced a composite performance index score of 9.19 -- that's 919% the performance of Microsoft Internet Explorer 7 (not IE8) in the same system. But after refreshing our test virtual machine with the public Beta 4, the index climbed above the 10.0 mark to settle at 10.44.
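
    Betanews has not published the exact formula behind that index, but the arithmetic of a composite score of this kind is straightforward: normalize each browser's result in each test against an IE7 baseline, then average the ratios. A rough sketch in Python, with purely illustrative test names and timings:

        # Rough sketch of a composite performance index: each timed result is
        # normalized against an IE7 baseline (a higher ratio means faster),
        # and the ratios are averaged. Figures are illustrative only, not
        # Betanews' actual data or weighting.
        IE7_BASELINE_MS = {"sunspider": 29000.0, "celtic_kane": 1200.0}

        def index_score(results_ms):
            """Average speed ratio vs. IE7; times in ms, lower is faster."""
            ratios = [IE7_BASELINE_MS[t] / results_ms[t] for t in IE7_BASELINE_MS]
            return sum(ratios) / len(ratios)

        # A browser ten times faster than IE7 on both tests scores 10.0:
        print(index_score({"sunspider": 2900.0, "celtic_kane": 120.0}))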

    How come the final build is faster? Much faster error handling, as shown by the browsers' scores in the Celtic Kane JavaScript test, which indicates that some of the temporary error handling code was likely removed; and generally better performance in all stages of the SunSpider benchmark.

    Windows Web browser performance index scores April 28, 2009.

    The public Beta 4 actually posted a slightly better performance score than the first nightly build of Beta 5, which posted a 10.33. Beta 5's Acid3 standards compliance score holds even with Beta 4 at 93%, and its Celtic Kane and HowToCreate.uk CSS rendering test scores hold up well against Beta 4. But the SunSpider scores show a general slowdown of 20 - 25% in most departments, which indicates that Beta 5 will mostly be focusing on features and usability rather than raw performance.

    And though the latest nightly build of Firefox 3.6 Alpha 1 ("Minefield") gained a tick in speed with a 10.25 index score -- compared to last Thursday's 10.14 -- it's been bested by 3.5 Beta 4, thanks in large measure to the latter's improved JavaScript error handling. Despite our early fears that the new TraceMonkey JavaScript interpreter wouldn't perform in the same league as Google Chrome and Apple's Safari 4 beta, Mozilla has been able to dial up the speed considerably, posting scores that are now 33% better than in our first tests of Firefox 3.1 Beta 3.

    Still, a score above 10.0 puts Firefox in a very competitive position against Chrome, currently the second fastest browser in our tests, which in turn trails the Safari 4 beta by only a small margin. Both Safari's and Chrome's index scores -- 14.39 and 13.07, respectively -- are helped by the fact that they post 100% compliance scores in the Acid3 test.

    While all this is going on, Mozilla has now publicly announced the release of Firefox 3.0.10, which may now be automatically downloaded using older versions. We'd been noticing some poor performance since last week's release of 3.0.9, whose reign didn't last so much as one week. Since yesterday, we managed to trace the source of our hanging problems -- thanks in large measure to Mark Russinovich's Process Monitor -- to a corrupted bookmarks file, specifically places.sqlite. Deleting the file and having Firefox rebuild it appears to have solved this problem for now (a sketch of that repair follows below), though we cannot say for certain yet whether Firefox 3.0 or one of our add-ons is responsible for the corruption.

    Download Mozilla Firefox 3.5 Beta 4 for Linux from Fileforum now.
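
    For readers chasing the same symptom, here is a cautious sketch of that repair in Python -- assuming the usual Windows profile location, and setting the file aside rather than deleting it, in case the diagnosis turns out to be wrong. Close Firefox before running it:

        # Cautious sketch: move places.sqlite aside so Firefox rebuilds it on
        # the next launch. Close Firefox before running. The profile path is
        # the usual Windows location; profile folder names vary per install.
        import glob, os, shutil

        profiles = os.path.expandvars(r"%APPDATA%\Mozilla\Firefox\Profiles")
        for db in glob.glob(os.path.join(profiles, "*", "places.sqlite")):
            shutil.move(db, db + ".corrupt.bak")  # keep a copy, don't delete
            print("Set aside:", db)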

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/27/EC_s_Reding__Europe_needs_a__Mr._Cyber_Security_'

    EC's Reding: Europe needs a 'Mr. Cyber Security'

    Publié: avril 27, 2009, 10:22pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    After an apparent victory in her efforts to prevent the UK from establishing a central database for private citizen communications, European Commissioner for Information Society and Media Viviane Reding said she wants the EU to create the post of point-man for the continent's cybersecurity.

    "Although the EU has created an agency for network and information security, called ENISA, this instrument remains mainly limited to being a platform to exchange information and is not, in the short term, going to become the European headquarters of defense against cyber attacks. I am not happy with that," stated Comm. Reding (PDF available here). "I believe Europe must do more for the security of its communication networks. Europe needs a 'Mister Cyber Security' as we have a 'Mister Foreign Affairs,' a security tsar with authority to act immediately if a cyber attack is underway, a Cyber Cop in charge of the coordination of our forces and of developing tactical plans to improve our level of resilience. I will keep fighting for this function to be established as soon as possible."

    The news comes as Reding meets with other government leaders in Estonia this week, to debate not only a pan-European policy for Internet security, but also the broader topic -- one that's near and dear to her heart -- of establishing some form of Internet governance, a topic she'll have more to say about next week.

    In the meantime, the UK Home Office decided this morning to back down from its plans to establish a central database for logging communications between private citizens -- a database to which the country's Internet service providers would have contributed. This after the EC issued a formal warning to the British government last week that it could go so far as to take it to court in Brussels, to protect against the possibility of any individual misusing such a database for unauthorized purposes.

    In a communiqué issued this morning by the British Home Office (PDF available here), Home Secretary Jacqui Smith essentially echoed some of the language of Comm. Reding's earlier statement: "For the police, the security and intelligence agencies, and other public authorities like the emergency services, being able to use the details about a communication -- not its content, but when, how and to whom it was made -- can make all the difference in their work to protect the public," states Sec. Smith. "It is no exaggeration to say that information gathered in this way can mean the difference between life and death. However, rapid technological changes in the communications industry could have a profound effect on the use of communications data for these and other purposes. The capability and protection we have come to expect could be undermined."

    UK Security and Counter-terrorism Minister Vernon Coaker (L - Gedling) had suggested that the creation of a database was necessary in order to comply with an EU directive mandating that personally identifiable information be kept on hand for 12 months. Some saw that as a way of sneaking in new government oversight, while passing the blame onto a higher authority. Although this morning's communiqué cited the European Convention on Human Rights, Article 8(1) ("Everyone has the right to respect for his private and family life, his home and his correspondence"), it then went on to say that the government ensures that the content of private communication may only be accessed by authorities under certain emergency circumstances.

    Amid those circumstances, it listed maintaining the economic well-being of the UK in such instances where national security may be jeopardized, and assessing whether taxes are owed by an individual. Still, it maintains that safeguards are in place to determine whether such cases mandate privacy invasion; and when they do, only a certain specially trained team of elite investigators are allowed to dive into private communications -- a team that sounds like something out of a Jerry Bruckheimer series, and that uses an acronym that must have been unavoidably tempting.

    "The single point of contact system (SPoC), extended beyond police to all relevant public authorities following the enactment of RIPA, created trained and accredited experts in each public authority who understand how to interpret the information that is held by communications service providers," reads the communiqué. "This group, trained partially by industry to know what data is available to support investigations, helps to ensure effective working relationships between investigators and companies."

    Already, the UK government has a kind of "tsar" in place to serve as the single point of contact, if you will, in cases where the government's authority may be under dispute, says the communiqué. This is the Interception of Communications Commissioner, who by law must have served as a judge. However, if citizens feel their private data has been abused by authorities, they may seek redress before the Investigatory Powers Tribunal.

    The Tribunal's own Web site describes itself this way: "The Tribunal can investigate complaints about any alleged conduct by or on behalf of the Intelligence Services -- Security Service (sometimes called MI5), the Secret Intelligence Service (sometimes called MI6) and GCHQ (Government Communications Headquarters). Because the Tribunal is the only appropriate place you can complain about the Intelligence Services, the scope of conduct it can investigate concerning them, is much broader than it is with regard to the other organizations under its jurisdiction."

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/27/One_week_later__it_s_time_for_Firefox_3.0.10'

    One week later, it's time for Firefox 3.0.10

    Publié: avril 27, 2009, 9:22pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It's only been six days since Mozilla's Firefox 3.0.9 arrived on the scene -- ostensibly a major security and bug-fix update to the world's #2 browser -- and already the organization is preparing another update. Once again, no formal announcement has been made, though version 3.0.10 has appeared on the organization's FTP site for final preparation.

    The emergence of yet another update follows a week of lackluster performance from the production version of Mozilla's browser in Betanews tests. Not only did release 9 lose some speed and performance, we noticed -- as we have from time to time with Firefox 3 -- the re-emergence of a memory leak that can leave the entire browser in the online equivalent of a coma. Release 10 comes none too soon; already, we noticed a kick in its step, gaining back what it lost performance-wise in Betanews tests, especially in the SunSpider benchmark. Release 10's performance score now stands at 5.19, which is actually higher than for Release 7 -- meaning, combining multiple tests, we find Firefox 3.0.10 to perform at 519% the performance of Microsoft Internet Explorer 7 (not IE8) in the same system.

    It's been a tough week for Mozilla, as planners decided to push back the Beta 3 release of its Thunderbird e-mail client by an indeterminate number of weeks, on account of unresolved bug issues ("blockers"). We also still await word on Firefox 3.5 Beta 4, a public release that could be the organization's best performing browser to date.

    Download Mozilla Firefox 3.0.10 for Windows from Fileforum now.

    Download Mozilla Firefox 3.0.10 for Linux from Fileforum now.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/27/It_s_finally_settled__Broadcom_and_Qualcomm_lay_down_their_swords'

    It's finally settled: Broadcom and Qualcomm lay down their swords

    Publié: avril 27, 2009, 5:51pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The mobile communications industry can now pursue 3G technologies without fear of being caught in a whirlwind patent dispute. That's the outcome reached last weekend when Qualcomm agreed to settle its remaining disputes with Broadcom, in a deal which (at least at first glance) will net Broadcom $891 million over four years.

    Qualcomm's incentive to settle was becoming obvious. In the current economy, a business plan built on hopes of revenue from a big settlement from Broadcom, or of an unprecedented judgment in its favor, was becoming untenable. That point was driven home this morning when the company released its quarterly revenue numbers: Though revenue declined by a mere 5.8% annually -- actually a noteworthy achievement -- net income could have ended up as high as $702 million, a decline of 22%. But putting this case behind it -- counting only its initial payment to Broadcom -- cost Qualcomm $748 million in this quarter alone, forcing it to post a net loss of $46 million.

    So did Broadcom win? If it has, then it hasn't won very much, certainly not in terms of prestige or honor or upholding the principles it appeared to stand up for not so long ago.

    As it turned out, this case followed what's now becoming a familiar pathology: One side claims it was wronged, and is acting for the good of the entire industry by standing firm against the designs of an evildoer. Then the other side countersues, and in so doing, exposes the dirty laundry of the plaintiff, creating an ironic twist that makes it difficult for anyone to stand on principle alone. Eventually, both sides call it off out of embarrassment, and in an effort to defuse the bombs they themselves have created, jointly proclaim their principles weren't so big a deal after all.

    Had either Qualcomm or Broadcom actually "won" this dispute, the rules which govern the production of 3G cellular communications would have changed drastically. But after nearly five years of this battle, the best either side was managing to achieve was a calculated stalemate.

    If all you had read about this case was last night's joint statement, you'd never know that at the center of this argument were many of the same issues that gave such color to the debate over Microsoft's effort to standardize its Office software file format: Can a corporation use its influence over standards agencies to force an industry to adopt proprietary technologies? The first sign of a real crack in the dam came in August 2007, when US District Judge Rudi Brewster effectively ruled that Qualcomm did use its influence as a member of the industry-wide Joint Video Team to drive adoption of certain elements of the H.264 video encoding standard that led directly to the use of Qualcomm patents. Then it used litigation tactics to shield that misconduct. That much of Judge Brewster's finding did survive appeal last December.

    But the issue at the center of Qualcomm's countersuit was similar: Although both Broadcom and Qualcomm manufacture components for all types of 3G handsets, Qualcomm has been the champion of CDMA while Broadcom is perceived as the standard-bearer for the European preference, GSM. Both standards are capable of utilizing the same alternative approach to a concept proposed by Motorola for enabling the same spectrum to carry more simultaneous signals. But that approach, called time-division multiple access (TDMA), lies at the heart of GSM technology, while it may or may not be a co-existing add-on to CDMA (code-division multiple access). Think of CDMA as spreading signals over a wider spectrum, while TDMA allocates time slices for specific signals within a given band.
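
    To make the time-slicing half of that distinction concrete, here's a toy sketch in Python -- purely illustrative, with none of the real air-interface math -- of how a repeating TDMA frame parcels a single band out among callers:

        # Toy illustration of TDMA: one frequency band, one repeating frame,
        # each caller holding a fixed slot within the frame. GSM's real
        # framing is far more involved; this only shows the scheduling idea.
        SLOTS_PER_FRAME = 8  # GSM divides each TDMA frame into 8 slots

        def transmitting_caller(slot_owners, t):
            """Return the caller allowed to transmit during time unit t."""
            return slot_owners[t % SLOTS_PER_FRAME]

        callers = ["A", "B", "C", "D", "E", "F", "G", "H"]
        for t in range(12):
            print(f"t={t:2d}: caller {transmitting_caller(callers, t)}")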

    Qualcomm claimed it had developed TDMA first, and that Europe's and Broadcom's adoption of it without compensating its rightful creator was unfair. Meanwhile, Broadcom has been working since the previous decade to utilize the same concept in 4G technologies, as Advanced-TDMA (which can co-exist with DOCSIS 3.0), demonstrating that TDMA was not simply a solution to a dilemma specific to CDMA.

    While some observers have stated their belief that the whole 3G topic is dying anyway, and that's why both sides decided to give things a rest, the fact is that when you look at the growth of portable technologies with respect to how much data they deliver, 3G traffic is still growing enormously -- in absolute volume, faster than 4G -- and may continue to do so for the foreseeable future. This was a conclusion drawn last February by Cisco, which sees global traffic for both 3G and 4G technologies growing pretty much in proportion with one another, at a rate of over 100% per year up through 2013, with 4G growing at a slightly greater percentage rate. That's not so much a factor of the number of users as how much data they consume.

    So last weekend's settlement sets up the new rules for a continuing market in 3G technologies. Both sides agree that if a manufacturer builds a handset or a device using either Broadcom or Qualcomm products, that manufacturer does not immediately become the customer of the other manufacturer. That may be the most important decision the two companies have made, in part because manufacturers such as Motorola and Nokia -- Qualcomm's other favorite courtroom nemesis -- now won't have to set aside funds in case they end up owing back fees for licenses they didn't know they'd have to purchase.

    The other reason is that the two companies have established clear boundaries for themselves, enabling the 3G technology market to flourish and grow at least as fast as Cisco predicts it will.

    To mark the occasion this morning, Broadcom announced it would create a $50 million fund to support math and science educational programs around the world. Consider this the ceremonial tree planting in the midst of a burnt and ravaged forest.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/26/Confirmed__Windows_7_RC_to_the_public_on_May_5'

    Confirmed: Windows 7 RC to the public on May 5

    Publié: avril 26, 2009, 12:52am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Leaving not much time for folks to stew in the rumors over the latest "leaked" builds (plural) numbered 7100 of the Windows 7 release candidate -- one of which may have been legitimate -- Microsoft decided late Friday night to officially confirm that May 5 is the official public release date for the Win7 RC.

    "I'm pleased to share that the RC is on track for April 30th for download by MSDN and TechNet subscribers. Broader, public availability will begin on May 5th," wrote Microsoft's Brandon LeBlanc in a corporate blog post late yesterday.

    News of the public release comes as disseminators of the real Build 7100 discovered the existence of a virtualization envelope that may enable at least the Ultimate SKU of Windows 7 to run applications meant for older versions of Windows. This may be how the company executes its transition plan to a fully 64-bit platform.

    Copyright Betanews, Inc. 2009


  • Lien pour 'BetaNews.Com/2009/04/25/_Deep_packet_inspection__could_become_the_target_of_legislation'

    'Deep packet inspection' could become the target of legislation

    Publié: avril 25, 2009, 12:14am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The two biggest threats to Internet users' privacy, from the point of view of Rep. Rick Boucher (D - Va.), come from behavioral advertising technology and from deep packet inspection (DPI) -- the ability for an ISP to scan the contents of IP packets, and make determinations as to their handling based on those contents. But the specter of a single company using both of these technologies together, like liquid hydrogen and liquid oxygen, spells out a more explosive danger. Chairing hearings of the House Subcommittee on Communications, Technology, and the Internet yesterday, Rep. Boucher made that clear:

    Congressman Rick Boucher (D - Va.)

    "What services that consumers consider essential to the safe and efficient functioning of the Internet are advanced by DPI?" asked Boucher during his opening remarks yesterday. "Since the death of NebuAd's DPI-based behavioral advertising service last year, are other companies using DPI to deliver behavioral advertising? What, if any, safeguards are in place to ensure that consumers are giving meaningful consent to the tracking of their activities on the Internet?"

    The nation's broadband providers would like to be able to use DPI as a method for implementing traffic control, especially for narrowing the bandwidth allowed for applications such as BitTorrent. In instances where they're involved in programming and content services, they'd also like to at least not be barred from implementing behavioral advertising, perhaps as a way of checking which clips viewers are watching online and targeting ads to parallel those clips.

    But both weapons in the arsenal of the same companies could spell disaster, which is why NCTA President and CEO Kyle McSlarrow trod very carefully during his prepared opening remarks yesterday, acknowledging the existence of both, but only separately and individually.

    "Packet inspection serves a number of pro-consumer purposes," read McSlarrow (PDF available here). "First, it can be used to detect and prevent spam and malware, and protect subscribers against invasions of their home computers. It can identify packets that contain viruses or worms that will trigger denial of service attacks; and it can proactively prevent so-called Trojan horse infections from opening a user's PC to hackers and surreptitiously transmitting identity information to the sender of the virus. Packet inspection can also be used to help prevent phishing attacks from malicious e-mails that promote fake bank sites and other sites. And it can be used to prevent hackers from using infected customers' PCs as 'proxies,' a technique used by criminals, in which user PCs are taken over and used as jumping-off points to access the Internet, while the traffic appears to be generated by the subscriber's PC. As a result, the technology can be used in spam filters and firewalls."

    Never mind, for the moment, that the whole concept of proxies was relegated to the realm of the malicious user. For Georgetown professor and Electronic Privacy Information Center Executive Director Mark Rotenberg, even if ISPs use DPI responsibly and not in concert with behavioral ad targeting, that doesn't make it right. From his perspective, breaching privacy bounds in the name of traffic control simply isn't ethical.

    "In the communications context, service providers and their businesses partners also have an obligation not to intercept the content of a communication except for the purpose of providing the service, to comply with a court order or other similar legal obligation," read Rotenberg's prepared testimony (PDF available here). "It is possible that the techniques being developed by these firms may help in some ways to safeguard privacy if they are robust, scalable and shown to provably prevent the identification of Internet users. But the essential problem is that they simply do not have the right to access communications traffic for this purpose. Also, I would not recommend that you alter current law or enable consent schemes to make this permissible."

    Though no new bill has been drafted, Rep. Boucher said up front it's his intent to draft one this year. He told the Subcommittee Thursday, "It's my intention for the Subcommittee this year to develop legislation extending to Internet users that assurance that their online experience is more secure. We see this measure as a driver of greater levels of Internet uses such as e-commerce, not as a hindrance to them."

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/24/Nvidia_s_licensing_situation_with_AMD_is_just_as_bad_as_with_Intel'

    Nvidia's licensing situation with AMD is just as bad as with Intel

    Publié: avril 24, 2009, 12:45am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    During yesterday's unveiling of its accelerated roadmap for 12- and even 16-core processors, an AMD executive said he did not believe the licensing situation between his company and Nvidia would enable Nvidia to produce chipsets that support future AMD platforms. Specifically, it appears Nvidia is not yet licensed to produce motherboard chipsets that support AMD's next-generation processors, reducing the likelihood of multi-GPU SLI support for AMD's "Istanbul" and future generations.

    "For 2010 moving forward, the solutions coming out from AMD will be AMD and on AMD at this time," stated server business unit vice president Pat Patla. "We don't expect to see new chipsets from Nvidia or Broadcom for server implementations in 2010. But they will continue to support all existing platforms moving forward through 2010."

    Anyone spreading the rumor that Nvidia is looking to invest in Via Technologies may be thinking Nvidia could use a friend -- any friend -- about now. Its ability to produce chipsets for Intel's Nehalem platform remains on hold, perhaps permanently now that it has countersued Intel over its right to say, in interviews with the press, that it supports Nehalem. Perhaps Patla's statement that Nvidia will "support existing platforms" can be interpreted as the closest thing to an olive branch it's going to get, especially from the CPU maker that now owns its principal competitor.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/24/The_plan_to_get_AMD_Opteron_back_in_sync'

    The plan to get AMD Opteron back in sync

    Publié: avril 24, 2009, 12:25am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Two years ago, after AMD promised to deliver the best performing CPU to data centers, its Barcelona architecture found the company trying to explain to customers why they shouldn't want performance -- an argument that looked just as embarrassing as it sounded.

    In AMD's last quarter, it actually managed to heal some of the ill effects of the negative economy on its desktop and mobile CPU segments, but not yet in the data center. Server CPU revenue is still hurting, though the company now declines to provide a specific breakdown. The way back, the company believes, is to create a marketing position similar to where it was in 2006, when system builders and partners started perceiving AMD as "one-upping" Intel.

    For that reason yesterday, the company unveiled something it's calling Direct Connect Architecture 2.0, an upgrade to the way its processor cores are directly linked to memory by means of the HyperTransport bus. It's this architecture which will enable a 12-core processor to enter production as soon as next year. But to make sure next year happens on time, the company is moving up the availability date for its "Istanbul" architecture -- hopefully a much happier place for AMD -- from what some had feared to be this fall, to next month.

    "This Istanbul native six-core product is once again going to be the world's first native Direct Connect Architecture-based product," said AMD server business unit VP Pat Patla yesterday, at times fumbling for the right superlatives. "And just like we've done with all of our past introductions, this is a six-core product that's going to be available for the two-socketed systems, four-socketed systems, and eight-socketed systems that are available and in market today."

    A tactic we've seen before with AMD, and which we're seeing again to no one's surprise, is to create one generation of processors which slips into the motherboards of its forebear with no problem. The next generation, then, will require a platform shift. In this case, Istanbul will slip into systems with Barcelona-era CPUs, and AMD couldn't be more eager for customers to do just that.

    "With Istanbul, we'll be bringing out 30% more performance into that same [Barcelona] thermal range, as we launch this product in June," said Patla.

    AMD 4/22/09 platform update slide [Courtesy AMD Corp.]

    From there, the company can resume its focus on kicking out the current generation, and moving toward the next one. Currently, Opteron processors have two clustering categories as designated by their numbers: the 2300 series for two-way systems (two processors, for eight cores) and the 8300 series for four- and eight-way systems (4P and 8P). Beginning next year, AMD will unveil two new architecture series with different numbering: The 4000 series ("San Marino") will play to 2P servers, and the 6000 ("Maranello") to 4Ps, including its first eight-core line, called "Magny-Cours" but pronounced "many-core." Uh, gentlemen...did you forget something? Having been burned once before by the lack of a Barcelona part that could sustain 3.0 GHz, you can't be too careful about what AMD omits these days.

    "The four-way market's been compressing for years because of the multi-core capabilities, because of the throughput we're bringing into that space. There's been a little compression of the high-end two-way market and the four-way market, starting to converge a little bit, for the areas of server consolidation, for the areas of virtualization," reported Patla yesterday. Though he might have benefitted from a trimming of verbiage, his point was that virtualization has been driving up utilization rates for CPU cores, to the extent where it's becoming more practical to run four-way quad-cores than not only eight-way dual-cores but even eight-way quad-cores.

    When asked directly yesterday why 8Ps weren't mentioned on the Magny-Cours roadmap, Patla responded, "At this time, we think [because of] the thread count and the server density, the Magny-Cours product in the 6000 series will be aimed at the four-socket and the two-socket space moving forward."

    The first 45 nm Magny-Cours products are being sampled now, AMD executives said yesterday. The 32 nm drop-in replacements for Magny-Cours and "Lisbon" on the 4000-series side will come in 2011. But beyond that, AMD says it's planning what it describes as a completely new generation of x86 architecture; and 2012 would be just about the right timeframe for allies like IBM to unveil their master scheme: a 28 nm part made with simple fine-tuning to 32 nm processes.

    AMD 4/22/09 platform update slide [Courtesy AMD Corp.]

    Up until that point, AMD is promising about 30% performance gains with each new generation, meaning 30% better performance for Istanbul over Barcelona, 30% more for Magny-Cours over Istanbul, and at least 30% more again for the 16-core DCA 2.0 architecture it's calling "Interlagos" for 2011. Likewise, similar shifts will be seen in the 4000 series, with plans to implement its eight-core "Valencia" architecture in 2011.
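
    Compounded, those per-generation promises add up quickly; assuming the flat 30% figure AMD is quoting, the arithmetic puts Interlagos at better than double Barcelona's performance:

        # Three successive 30% generational gains, compounded from a
        # Barcelona baseline of 1.0 (assuming AMD's flat 30% per generation).
        perf = 1.0
        for gen in ("Istanbul", "Magny-Cours", "Interlagos"):
            perf *= 1.30
            print(f"{gen}: {perf:.2f}x Barcelona")
        # Interlagos lands at about 2.20x Barcelona.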

    Nevertheless, AMD is omitting any hint of its frequency numbers for all its new parts, although this time it's prudently avoiding the step it took last time of claiming frequency was unimportant and attempting to explain why. This time, Patla advised reporters attending yesterday's announcement event to "pay more attention to the performance jumps." Indeed, those jumps will be noteworthy, but while AMD talks about 30% per generation, Intel is in the midst of implementing another seismic shift with Core i7. Making this new game plan work means betting that Intel's shift in its timing reflects the stumbling of the giant. If Xeon continues its gains in the server market, neither Istanbul nor Valencia nor anyplace else on the map may end up any more gratifying than Barcelona.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/23/For_the_desktop__AMD_covets_the_budget_enthusiast_with_3.2_GHz_quad_core'

    For the desktop, AMD covets the budget enthusiast with 3.2 GHz quad-core

    Publié: avril 23, 2009, 9:48pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If you've ever had the pleasure of owning a Nissan Z car (I've owned two in my lifetime), you understand the extra feeling of confidence you get from still being able to afford your house, your clothes, and food. They're very solid performers, they look presentable in a crowd full of Porsches and BMWs, and yet their owners are conscientious folk who can also maintain a budget.

    Every time I tell the fellows at AMD that I've been a Z owner, they shout back at me, "Well then, you know what we're talking about!" They're hoping that there's a certain niche of enthusiast system builders who aren't all that interested in displaying the measurements of their disposable income in public. For them, right on schedule, AMD released its next iteration of sensible high performance: the Phenom II X4 955 Black Edition CPU.

    The "Black" theme is to give the buyer an image of high-class. It's also perhaps to dim the lights a bit on the whole theme of competition, which is usually what enthusiasts like to do most with their systems. Despite the fact that AMD is now comfortable with clocking its Shanghai series processors above 3.0 GHz -- the 955 is set at 3.2 GHz, and AMD encourages overclocking -- its principal competition from Intel is the sixth CPU down on its most recent price list, well within the upper-middle-class of its product range, and not nearly its best competitor.

    At $245 for 1,000-unit quantities (street prices may be higher, though Newegg.com isn't charging a nickel more), the quad-core 955 is priced well below Intel's current $999 premium (in 1K units) for its Core i7 965. AMD is very happy to remind you that you're paying one-fourth the price. But in Tom's Hardware tests published today, the 965 performs generally better -- for example, about 11% better in the 3DMark overall score, and nearly 19% better in the PCMark suite score. Still, that's not four times the performance; and anyone whose Z car has been beaten in a drag-race by a Maserati, but not by a full body length, knows what it's like to stand toe-to-toe with legends and not feel ashamed.
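
    Run the article's own numbers and the value argument becomes plain. A quick back-of-the-envelope in Python, using the 1,000-unit prices and the larger (PCMark) performance delta cited above:

        # Back-of-the-envelope price/performance from the figures above:
        # Core i7 965 at $999 vs. Phenom II X4 955 at $245 (1K-unit prices),
        # with the 965 roughly 19% faster in the PCMark suite score.
        price_965, price_955 = 999.0, 245.0
        perf_965, perf_955 = 1.19, 1.00

        print(f"price ratio:       {price_965 / price_955:.2f}x")  # ~4.08x
        print(f"performance ratio: {perf_965 / perf_955:.2f}x")    # 1.19x
        value_edge = (perf_955 / price_955) / (perf_965 / price_965)
        print(f"955's perf-per-dollar edge: {value_edge:.2f}x")    # ~3.43x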

    AMD Phenom II X4 CPU set against Phenom II wafer

    Today, the Phenom II X4 955 takes its place at the top of AMD's Dragon platform, which features its 7-series chipsets and ATI Radeon HD 4890 graphics card. When AMD premiered the Phenom II X4 series back in January, its goal was to keep its best performing platform components under $1,000. A buyer investing in a Core i7 platform, by contrast, may find that the component prices, put together, end up doubling what he'd invest in the Dragon platform. And AMD's move today knocks $40 off the price of the same 2.8 GHz model 920 CPU it introduced in January.

    But today's entry adds a new wrinkle to the equation: The 955 uses AMD's new Socket AM3. This means you can purchase a 955 and drop it into a motherboard to replace the 920, or any other Socket AM2+ CPU. But despite the way things sounded back in September 2007 when AMD announced Socket AM3, you cannot drop a Socket AM2+ CPU into a Socket AM3 motherboard. AMD issued a nasty little warning about this earlier this month.

    So what would you really be spending to take full advantage of Socket AM3? Checking today's prices at Newegg.com, we were able to put together a system that includes the 955 Black Edition, Asus' M4A78-E motherboard with AMD's 790GX chipset, Asus' version of the Radeon HD 4890 card with 1 GB of GDDR5 memory, a pair of Corsair 4 GB DDR2 memory modules (the Dragon platform does not yet use DDR3), and a Seagate Barracuda 7200.11 1.5 TB hard drive. That gave us a subtotal of $879.95, which doesn't give us much breathing room for extras such as power supply, case, lights, cooling, and cabling if our aim is to stay under $1,000.

    On the other hand, what would our price be for an Intel system that's comparable in performance to the new Black Edition -- not Intel's premium CPU, but something that performs about as well as the 955? Tom's Hardware tests reveal that the closest performer in the Intel category is the 2.83 GHz Core 2 Quad Q9550, which Intel sells for $266 (Newegg's markup isn't that high at $269.99). And yes, you read right, Intel's 2.83 GHz model is matched with AMD's 3.2 GHz model.

    Right now, Newegg is selling an Asus motherboard with Intel's P45 chipset, for $99 after rebate. That more than compensates for the CPU price premium, since with this motherboard we can still use DDR2 memory. I can keep the other Dragon components and still save ten bucks, paying $869.94.

    At that point, the question becomes whether I want to go up against Maseratis and lose by a length, or go up against a Chevy Caprice Classic and get beaten by a nose. AMD's new platform components may have the "Black Edition" theme, but until they catch up to Intel in the architecture department, the source of that blackness may be smoke.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/23/More_speed_to_come_from_the_first_Firefox_3.6_alpha'

    More speed to come from the first Firefox 3.6 alpha

    Publié: avril 23, 2009, 6:06pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    While awaiting the first public (non-nightly build) copy of Firefox 3.5 Beta 4, we noticed this week the first nightly alpha build of the Mozilla browser to come afterward: the first 3.6 Alpha 1 builds. In Betanews' initial performance tests of some of Mozilla's very latest code, there's a lot of room for encouragement: The latest code-name "Minefield" build posted 11.7% better performance overall than the last code-frozen nightly build of Firefox 3.5 Beta 4, and 232% the overall performance of the latest Firefox 3.0.9, released just yesterday.

    Firefox 3.6 Alpha 1 posts the highest Acid3 test score for Mozilla.

    Our tests pit the latest Windows-based Web browsers against one another in a virtual Vista system, and combine the Acid3 standards test with three trusted performance tests for CSS rendering and JavaScript speed. Nearly all the early news for the 3.6 alpha was good, including posting Mozilla's best-ever score on the Acid3 test -- a 94% -- and posting a Betanews cumulative index score above 10.0 for the first time, which means this alpha performs at over ten times the speed of Microsoft Internet Explorer 7 (not the current version, IE8, but the previous one).

    This news comes as we also noted that Google's latest bug-fix update, build 172.8 for its Chrome 2 series browser (Chrome 1 is the release version, Chrome 2 is the beta), slowed down relative to version 172.6. Still quite fast, but now at an index score of 13.07, Chrome may see something coming up in its rearview mirror, as the latest Minefield score of 10.14 pulls Mozilla's browser to within 25% of Google's speed. With Google's release build not yet scoring a 12, it's conceivable that the final Firefox 3.6 could pull even with Chrome 1 while Google keeps working on tuning Chrome 2.

    Comparative test scores for browser performance show Firefox 3.6 gaining, Google Chrome 2 receding a bit.

    It's not such good news for users of the release builds of Firefox 3, as the latest 3.0.9 actually slowed down a bit overall. While 3.0.8 had scored a 4.7, 3.0.9 scores only a 4.37.

    The new time-based history deletion dialog in Firefox 3.6 Alpha 1.

    Once we stop tinkering with the accelerator pedals for a minute or two, we might get a chance to play with some of 3.6's new features. We did stumble across one of them right away: When clearing your history and the contents of the browser cache, you can now specify generally how much time you want to wipe clean -- the last hour, two hours, four hours, today, or everything in the cache. Now, at the moment, this comes at the expense of specifying what you want to wipe clean -- history, cache, cookies, passwords -- but perhaps the team is looking for a way to integrate those choices into the new setup.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/21/AMD__Six_core_Istanbul_server_CPUs_moved_up_to_May'

    AMD: Six-core Istanbul server CPUs moved up to May

    Publié: avril 21, 2009, 11:31pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    During the early part of this afternoon's conference call with analysts, AMD CEO Dirk Meyer said -- one day ahead of a momentous product call scheduled for tomorrow afternoon -- that strong reception and testing for its Istanbul-architecture server CPUs will enable the company to begin revenue shipments of its first six-core products next month. That will enable shipments of six-core systems from suppliers as soon as June, said Meyer.

    This despite a continuing, if somewhat diminished, loss for the first quarter of the year of $416 million, on revenue that was 21% lower annually. The server side of the business, Meyer admitted twice, was something of a downer for the quarter, while sales of CPUs and graphics chips in the desktop and mobile segments rose to compensate. The company continues to be cautious about its outlook, and disputes Intel's claim earlier in the week that the fallout in the technology industry had hit bottom.

    7:00 pm EDT April 21, 2009 - Tomorrow afternoon, AMD will announce significant accelerations in its technology roadmap, one portion of which will be an early introduction of the company's 45 nm six-core server processor, code-named Istanbul. This news came late this afternoon from CEO Dirk Meyer, during his quarterly conference call.

    "I am pleased to announce that, because of our strong engineering execution, we are pulling in revenue shipments of Istanbul into May, for system availability in June," Meyer told analysts. When Istanbul was originally announced in March 2008, it had been scheduled for sometime within the second half of this year. That move comes on the heels of Intel's announcement last week that it was moving up the shipment dates of its 32 nm Westmere processors, but also extending its window of availability out further, changing Intel's notorious "tick-tock" timing. AMD lost some ground in the server CPU department last quarter, though its executives deny it lost market share to Intel -- they contend it's a bad market for enterprises all around.

    "Server was definitely a weak spot," noted AMD CFO Bob Rivet this afternoon. "Spending has been clamped down pretty hard, so server was actually a decline quarter-on-quarter. Strongest growth was in notebooks, then desktops were reasonable but still grew positively quarter-on-quarter."

    Revenue from CPU sales overall was still down annually by 21%, but up 7% over the disastrous fourth quarter of last year, to $938 million. Graphics processor sales declined 15% annually to $222 million, in the only market segment where average selling prices (ASPs) actually rose a bit. But with costs out of the way, even with lower prices, gross margins bounced back a full twenty points to 43%, besting AMD's target of 40%. That made this last quarter's loss not nearly as dreadful as the holiday economic storm. (AMD's numbers incorporate the initial operating figures for Global Foundries, its spinoff that now produces chips for AMD.)

    "A bunch of moving parts," as Meyer characterized it. While server ASPs were up sequentially, he stated, "On the desktop side, ASPs were flat to actually up a little bit; on the notebook side, ASPs were down, mostly driven by a mix shift in the marketplace towards lower-end machines. In addition, we walked into the quarter with a pretty big inventory position, so we clearly found opportunities to move inventory, which was also part of what drove the quarter-to-quarter ASP decrease. Finally, [when we] stand back and look at it, our server business was down a little bit quarter-on-quarter, while both the desktop and the notebook businesses were up quarter-on-quarter, which affects the overall ASP and moves it down."

    So is it over, as Intel indicated it was? Meyer wasn't ready to go that far, telling one analyst, "I would say that the inventory correction is largely behind us, and what we're left with is just uncertainty about end user demand, with seasonal patterns and the overarching economic question playing off against each other.

    "What I would characterize the rest of the year as, [is] one of a technology leapfrog," Meyer told another analyst. "Intel came off their [analyst call] and announced the availability of Nehalem. Of course, that's going to take quarters to ramp across the marketplace. Meanwhile...Shanghai, the quad-core Opteron part, plays extremely well and offers great value, particularly against highly scalable systems and high-density cloud computing installations." While Intel's tack last week was to project a fuzzy picture of the future as though it were clearer and upbeat, AMD's strategy today was not to project much of a picture at all, but to stay upbeat nonetheless.

    "We are executing well on every major element of our strategy," stated CFO Rivet, "from launching Global Foundries to reducing our cost structure to delivering a growing portfolio of platforms tuned to today's increasingly value-conscious end user mindset...We entered 2009 a very different company than the one you were following as recently as a year ago. AMD, the product company [as opposed to Global Foundries] is a much nimbler operation, right-sized to respond to today's economic uncertainty, as well as the dynamic demands of our world-class global customer base."

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/21/MySQL_5.4_gets_bigger_anyway__encroaching_on_new_parent_Oracle_s_turf'

    MySQL 5.4 gets bigger anyway, encroaching on new parent Oracle's turf

    Publié: avril 21, 2009, 10:17pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    When Oracle CEO Larry Ellison announced his acquisition of Sun Microsystems yesterday morning, he didn't mention MySQL at all -- his company's principal competitor in the small systems database space. Maybe that was just for spite: It's no secret that Ellison wanted MySQL; he said so explicitly three years ago. It was one of the key missing elements in the top-to-bottom stack he's been looking for, a way to create a line-up of pre-configured systems with everything customers need right out of the proverbial "box."

    But MySQL's place in Ellison's stack doesn't extend to the enterprise, where the Oracle DB still rules -- at least in his mind. Eleven million installed MySQL customers plus a resurgent Microsoft SQL Server aside, Oracle DB is, from Oracle's perspective, an unstoppable juggernaut.

    While participants in this week's MySQL Conference and Expo in Santa Clara were debating the meaning of changing the flags over the front entrance once again (it was acquired by Sun only last year), the community for the world's principal open source database maintained the course it had set last week. Today, the group heralded the official release of MySQL 5.4, whose principal improvement is bigger and better support for the InnoDB transactional storage engine. That engine will help MySQL enter more enterprises by removing version 5.1's limitation of four cores per instance, moving all the way to 16-processor ("16-way") support for x86 servers with multiple cores per processor, and 64-way support for Sun's SPARC-based CMT servers.

    If you remember the days when "Toyota Truck" was an oxymoron in the heavy load division, you know how it feels when barriers are shattered. This puts MySQL into the heavy load category, which isn't exactly inside the boundaries of Larry Ellison's nice little stack.

    But Ellison is rarely without an ace up his sleeve, or at least an ace somewhere handy; and in this case, he made sure he had one back in October 2005. That's when Oracle purchased Innobase OY, the makers of the InnoDB database engine. See, MySQL is officially a database management system, which means it's quite capable of managing data stored by other open source engines. While MyISAM is the one designed for MySQL and intended to work with it by default, Innobase developed InnoDB not just for MySQL, but as an open source engine for transactional data. It's through the expansion of MySQL's support for InnoDB that version 5.4's embrace of 16-way servers has come about. Understanding how this particular innovation got started requires us to review a little bit about the ISAM methodology -- specifically, why it's been such a lucky charm for MySQL, up until the point where it needs to expand into the enterprise. For more on that, I'll cite...well, myself, from a textbook I wrote in 1998:

    ISAM [Indexed Sequential Access Method] is not another trademark, nor does it represent some proprietary technology invented just for the sake of the cute acronym. Instead, it refers to a technique for locating an entry in a database table. In short, an ISAM driver or server uses a separate table called the index to look up a key number for a record. A key number is a unique entry used to identify that record, such as a serial number or purchase order number. Having found that, the index then points the server in the direction of the true record in the database, thus saving some search time.

    ISAM relies on a couple of conditions being met before it can work properly:

    • No two records in a table may be identical to one another. If you think about it, no properly conceived database table would have any need for identical records. Even if your table were a catalog of baseball cards and a given collection contained two identical cards, both cards should be given unique identifiers, making their respective records unique.
    • At least one column of the database table must contain fields whose contents are unique for each record. Generally, a serial number qualifies as such a column. This column serves to contain the key field that uniquely identifies each record.

    For ISAM, a separate table is generated for each key field column. This table is the index for the database table. It contains two and only two columns: a duplicate of the key field column, and a separate column recording the location of the record in the table whose key field matches the duplicate in the index. The theory here is that because the index table is smaller, it's quicker to search through it than through the main table. But generally, ISAM drivers "cheat" and sort the index column, then employ a binary search instead of a sequential search...which is far faster. So why isn't it called "IBAM" rather than ISAM? Sometimes it's just too difficult to ditch a cute acronym.
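    To make the excerpt concrete, here's a minimal Python sketch of the same idea -- a two-column index of (key, record position) pairs, sorted so a binary search can stand in for a sequential scan. The baseball-card table and every name in it are invented for illustration, not taken from the textbook:

    import bisect

    # Hypothetical "table": records stored sequentially, keyed by serial number.
    records = [
        {"serial": 1007, "name": "Ruth rookie card"},
        {"serial": 1003, "name": "Mantle, 1952"},
        {"serial": 1010, "name": "Aaron, 1954"},
        {"serial": 1001, "name": "Cobb, 1909"},
    ]

    # Build the index: two and only two columns -- a copy of the key field,
    # and the position of the matching record in the main table.
    index = sorted((rec["serial"], pos) for pos, rec in enumerate(records))
    keys = [k for k, _ in index]  # the sorted key column, ready for binary search

    def isam_lookup(serial):
        """Find a record via the index, instead of scanning the whole table."""
        i = bisect.bisect_left(keys, serial)   # the binary ("IBAM") search
        if i < len(keys) and keys[i] == serial:
            return records[index[i][1]]        # follow the pointer into the table
        return None

    print(isam_lookup(1003))   # -> {'serial': 1003, 'name': 'Mantle, 1952'}
    print(isam_lookup(9999))   # -> None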

    Next: MySQL's road around ISAM leads to Oracle, but not the way it planned...

    Fast-forward eleven years: MyISAM isn't really all that sequential either, which is why it's so fast. But because it's still algorithmic search grafted onto the sequential model, it's not very well adapted to distributed processing. That's okay if you're dealing with a Web-based server where text searches, for example, are executed from one location -- MyISAM is very well suited for that. But for accounting purposes especially, a database architect needs a way for multiple tables to appear to be updated simultaneously, as well as for those updates to be rolled back simultaneously in case of an error.

    A common example for a transactional database involves a system with multiple bank accounts. If money is transferred from one account to another, from a sequential standpoint -- the "S" in ISAM -- that's two queries: one withdrawal and one credit. But should the power go out on the server in-between those two queries, the withdrawn funds might simply disappear. (That's the secret reason why early database writers performed the credit first.) With a transactional database engine, although both queries are written as though they were sequential, the update appears to take place in parallel. In case of an error, the update doesn't appear to have taken place at all, which is better than the alternative.
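    Here is a minimal sketch of that guarantee, using Python's built-in sqlite3 module as an illustrative stand-in for a transactional engine (SQLite, not InnoDB; the account table and figures are invented for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 100)])
    conn.commit()

    def transfer(frm, to, amount):
        """The withdrawal and the credit succeed or fail as one unit of work."""
        try:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, frm))
            # If the power failed here under a purely sequential model, the
            # money would simply vanish; inside a transaction, it cannot.
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, to))
            conn.commit()       # both updates become visible at once
        except sqlite3.Error:
            conn.rollback()     # or neither appears to have happened at all

    transfer(1, 2, 250)
    print(conn.execute("SELECT id, balance FROM accounts").fetchall())
    # -> [(1, 250), (2, 350)]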

    InnoDB is a transactional engine; MySQL developers can create tables "in" InnoDB instead of "in" MyISAM. When they do, the texture of their programs changes somewhat to take advantage of the transactional model. And in some cases, searches actually slow down because InnoDB has a larger overhead. But the payoff comes in the form of fuller functionality and greater reliability -- and Larry Ellison knew three years ago that such a payoff would be necessary for MySQL's evolution. He said so explicitly, and that's why he purchased Innobase OY.
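    Choosing the engine is a per-table decision in MySQL's DDL. A sketch, assuming a reachable MySQL server and the mysql-connector-python driver -- both assumptions made for illustration, since the article names neither:

    # Assumes a local MySQL server and mysql-connector-python; the host,
    # credentials, and table names are all hypothetical.
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="demo")
    cur = conn.cursor()

    # The engine is chosen per table; same schema, different storage semantics.
    cur.execute("""CREATE TABLE search_cache (
                     term VARCHAR(64), hits INT
                   ) ENGINE=MyISAM""")        # fast, non-transactional
    cur.execute("""CREATE TABLE ledger (
                     entry_id INT PRIMARY KEY, amount DECIMAL(12,2)
                   ) ENGINE=InnoDB""")        # transactional, with rollback
    conn.commit()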

    So the MySQL community marches forward, with what was once a plan to take on Oracle head-on by supporting a tool...owned by Oracle. Now it's a plan to make a bigger place for itself than just a slot in the Ellison stack.

    The early numbers for MySQL are dramatic, with easily measurable performance improvements of about 56% in workload tests, and as much as triple the performance in distributed connectivity tests, thanks to that 16-way support. In a December 2008 test of a MySQL 5.4 beta by Sun's Robin Schumacher, the new engine's capacity for subquery optimization -- a smarter way to break down queries nested within queries -- led to speed improvements in subquery benchmarks of as much as 40,000%. That's not a typo; that comma is in the right place.
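    The 40,000% figure belongs to MySQL 5.4's optimizer, but the general idea -- letting the planner decompose a nested query once, rather than re-running it for every row -- can be illustrated with any planner. A sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in (not MySQL; the schema is invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    """)

    # A query nested within a query: find orders placed by EU customers.
    nested = """SELECT * FROM orders
                WHERE customer_id IN (SELECT id FROM customers
                                      WHERE region = 'EU')"""

    # Ask the planner how it will actually run this; a good optimizer
    # materializes or flattens the subquery instead of repeating it per row.
    for row in conn.execute("EXPLAIN QUERY PLAN " + nested):
        print(row)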

    As MySQL senior software architect Mikael Ronstrom reports on his blog today, "We have consistently seen improvements in the order of 30-40% of sysbench top numbers and on large number of threads 5.4.0 drops much less in performance than 5.1. The new InnoDB Thread Concurrency patch makes the results on high number of threads even more impressive where the results have gone up by another 5-15% at the expense of 1% less on the top results (there are even some DBT2 runs that gave 200% improvement with the new algorithm)."

    Version 5.4 would have been released today even if Larry Ellison had never entered the picture -- in which case, our headline might have read, "MySQL gets bigger: Now what will Oracle do?" If making MySQL bigger makes it better, Ellison's 2005 move may be his insurance policy.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/21/Firefox_3.0.9_is_publicly_available__announcement_to_come'

    Firefox 3.0.9 is publicly available, announcement to come

    Publié: avril 21, 2009, 6:26pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    After a weekend of stability testing, version 3.0.9 -- the latest security update to the Firefox 3.0 browser series -- can now be downloaded. As usual, Mozilla isn't making the new version's release public for at least another day, so if you select Check for Updates from the Help menu, you won't see the new version just yet, though you can download it from Fileforum and install it manually without problems.

    When the company releases its list of addressed security issues -- perhaps as soon as tomorrow -- expect a larger than normal list. Among the general bugs the organization is addressing is one we've experienced ourselves, especially since many of us use Firefox for communicating with our Betanews CMS: Submitting data content in large forms can sometimes be a real bear, and we've noticed this since version 3.0.7. This issue, among others, has apparently been addressed and fixed.

    Download Mozilla Firefox 3.0.9 for Windows from Fileforum now.

    Download Mozilla Firefox 3.0.9 for Linux from Fileforum now.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/21/Interview__Former_WSJ_publisher_Gordon_Crovitz_on_paying_for_online_news'

    Interview: Former WSJ publisher Gordon Crovitz on paying for online news

    Publié: avril 21, 2009, 5:38pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Interview

    For newspapers that have seen their advertising revenue -- especially in classifieds -- cut in half or worse by the rapid acceleration of Internet news as an alternative, the situation is stark: They must transform themselves in order to survive. Just over the last few days, we've learned that Gannett, publisher of USA Today, and The New York Times Co. are posting losses for the last quarter, with revenue declining at an annual rate of as much as one-fifth on account of falling ad sales. Some may not be able to sustain similar losses through the rest of the year, and the Times Co. is threatening the shutdown of the Boston Globe.

    Maybe newspaper publishers can save some form of their print media products, and maybe they can't; but in any event, they will need to find some way to make their online operations workable, because print alone will no longer sustain the newspaper business.

    There are some who feel the newspaper business isn't necessarily entitled to be sustained. On the opposite side of that issue are congressmen who are proposing legislation to grant newspapers non-profit status, lowering their tax rates as a way of keeping them alive. But newspapers have been threatened before, first by the onset of network radio news in the late 1930s and into World War II, and second by the rise to power of network TV in the 1960s. They adapted and thrived, and one veteran publisher believes it's time for a repeat performance. He is L. Gordon Crovitz, formerly Publisher of The Wall Street Journal and currently partner in JournalismOnline.com, a venture founded with publisher Steven Brill and investor Leo Hindery, whose objective is to save print publishers by giving them a single portal for cultivating alternate revenue streams.

    Crovitz spoke at length with Betanews last Friday.

    Scott Fulton, Betanews: I'll start with what I think would be the most important question that any business entrepreneur would probably be asked by [your partner] Leo Hindery if they were to have a meeting together: How much do you believe, in this economy, an Internet news reader would be willing to pay for a legitimate online news service?

    JournalismOnline.com co-partner and former Wall Street Journal publisher L. Gordon Crovitz. [courtesy JournalismOnline.com]

    L. Gordon Crovitz, partner, JournalismOnline.com: A lot has changed since the commercial beginning of the Internet. Consumers now are very happy to pay for digital downloads of music; they pay for ringtones; when they're playing video games, they're happy to pay for virtual shields and virtual swords. So consumers, unlike in the early years of the Web, are quite willing to purchase digital access, digital content, digital entertainment. I remember in the first years of The Wall Street Journal Online, one of the barriers we had was that people did not want to conduct commerce on the Internet. PayPal and services like that have created trusted digital transactions, and have also created a transaction business online -- much easier than it was ten years ago.

    Now, the situation for news publishers is pretty simple: Many news publishers, until very recently, thought that online advertising would continue to grow at 20 to 30% per year, and it's very clear that even though online advertising is a powerful driver of revenue, in a cyclical downturn of the kind we're in now, advertising will not be as strong. And as a consequence, all the news publishers we've spoken to have said that they are looking at all of their revenue opportunities, including the prospect of charging some amount for access to content or other services.

    Scott Fulton: Isn't part of the problem there, with regard to revenue from advertising, the fact that unlike any other news medium up to now, this particular online model is dependent upon advertising that comes from a couple of shared platforms that all these news publishers collectively rely upon -- Platform-A, Google AdSense -- so that revenue tends to flatten out the more popular [a publisher] gets?

    Gordon Crovitz: Most news publishers derive the bulk of their online revenue from display advertising online, as opposed to some other category. And display advertising has been growing at the slowest rate, or even contracting. So that's become a much less reliable source of revenue, and [in cases] where it's the only source of revenue online, many publishers are finding it challenging to support a news staff.

    Scott Fulton: You mentioned earlier that there are other online businesses which have successfully tested the waters, which suggests that there is some disposable income out there. I think your point is, that if there are some customers that are willing to spend money on the use of virtual shields and swords, maybe they'd be willing to spend some money on hard evidence and hard facts. But I remember that prior to the launch of iTunes, there was a lot of research and concern about, what is the proper level to be charging for a song? And that 99¢ figure wasn't pulled out of the air, that was determined through serious research as to the nature of the relevant market. So I'm wondering whether a similar amount of research is being put into your endeavor? In other words, do we know how much other expendable income there is out there, to be spent and that should be spent and that isn't being spent, on online news products?

    Gordon Crovitz: Well, a couple of things: One, consumers do spend a lot of money to access news. They spend that money through print subscriptions, through cable [TV] subscriptions, and others. So people do pay to access news when asked. But the broader answer is, I think, the right model, the right price for one publisher won't be the right answer for another one. And part of our model is to allow publishers to set their own terms, their own price, their own products, because it will differ from one to another. What we'll be able to do is collect research and data on which content access services are successful and popular among consumers, and be able to share that research with the affiliates that belong to our program.

    So you're right, in the case of news, I don't think it's going to be one-price-fits-all by any means. And the opportunity we're offering news publishers is to work with them, to help them determine the best model for their brands and content.

    Next: How does a publisher build an audience while charging that audience?


    Scott Fulton, Betanews: I know your partner, Steven Brill, last November presented a memo to The New York Times, in support of going to an online subscription model...Mr. Brill's comment to the Times was essentially, "You've already done Step 1, you've built an audience. Now it's time to do Step 2, to monetize that audience." That seemed to suggest to me that the business model for online news, as presented at least to the Times, is first to build an audience, and then to charge it. Well, isn't that a little counter-productive? That seems to say to me that the only way to build an audience is to give away your product? I would think that you'd want to start selling...

    Gordon Crovitz, JournalismOnline.com: [I think the model Steven suggested] was quite constructive. The online Journal launched free, but with word that a subscription would eventually be charged. Large audiences gathered, and over time, the number of subscribers grew much larger than the number of people who had accessed it for free.

    Scott Fulton: I want to cite a paragraph you wrote for a column in the Journal last February: "For years, publishers and editors have asked the wrong question: Will people pay to access my newspaper content on the Web? The right question is: What kind of journalism can my staff produce that is different and valuable enough that people will pay for it online?"

    When you were working more directly with Dow Jones, you were the founder of Factiva, [a subscription-only custom business news service]. And that would be an answer to that second question, but it's also an obvious customer-driven, very customer-centric service that produces superior news quality on request for individual customers. Does a news producer, in order to meet the criteria suggested by your second question, the "Right Question," have to be capable of being a Dow Jones, of being a Factiva -- that your staff must produce distinct quality?

    Gordon Crovitz: No, no, no. I think every news brand has got its own brand attributes, and there are certain things that its reader expects of it. It might be local news, it might be sports news, it might be some other kind of news. So I think that most, maybe all news publishers, for some fraction of the audience -- I'm not saying for everyone, but for some fraction of their audience -- they will have the ability to craft services that some percentage of their unique visitors will access.

    I find that a lot of news publishers don't track this, but what we expect will be the case is that news publishers will continue to provide a lot of content services without a fee. The opportunity is for some percentage of those unique visitors to generate subscription revenue. The online Journal may be indicative, anyway -- there are 20 million monthly unique visitors to the online Journal, of whom 1.1 million are paying subscribers. So, about five percent of the total. So if people think of the online Journal as everything behind a paid wall, that's not accurate. It's very much a hybrid model, and I would expect that those publishers would pursue that kind of hybrid model.

    [ME's NOTE: In our story last week on the founding of JournalismOnline.com, I inadvertently transposed a few words and never caught myself in the copy edit. So for a while, I stated that Mr. Crovitz had announced his intentions to make The Wall Street Journal online free, when the word I meant to include was Murdoch, as in Crovitz' new boss at the time, Rupert Murdoch. My guest copy editor who finally located my error was one L. Gordon Crovitz, whom I thank very sincerely and to whom I also apologize for the transposition.]

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/20/Sun_goes_down__Larry_Ellison_disrupts_the_software_landscape_again'

    Sun goes down: Larry Ellison disrupts the software landscape again

    Publié: avril 20, 2009, 11:37pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Analysis

    There were two business models for the software industry, and now once again, there are two respective champions of those models: In one corner is the undisputed master of the "embrace and extend" principle, perceived worldwide as looking after itself and its own interests, while recently opening up its communications protocols to free licenses, supporting developers with free tools, and giving away the software needed for users to build its platform. In the other corner is a seasoned dealmaker, stalking after prey sometimes for years before trapping it into a deal it can't refuse, preaching the principle of openness while clearly and even transparently acquiring the components for a comprehensive platform where all roads lead through the company and into the company, not even hiding the fact that it rarely creates its own technology.

    Pop quiz: Which one's Microsoft and which one's Oracle?

    There is nothing the least bit secret or fuzzy about Oracle CEO Larry Ellison's business strategy. It is plain, simple, even brutal, but in recent years very effective. In a 2006 interview with the Financial Times' Richard Waters, Ellison was asked point-blank whether he believed the open source business model would be disruptive to Oracle's plans.

    Point-blank question, point-blank answer: "No. If an open source product gets good enough, we'll simply take it...Once Apache got better than our own Web server, we threw it away and took Apache. So the great thing about open source is nobody owns it -- a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that's what we'll do. So it is not disruptive at all -- you have to find places to add value. Once open source gets good enough, competing with it would be insane."

    Waters' interview was ostensibly about whether Oracle would acquire Linux maker Red Hat -- something which Ellison most obviously considered for a time. Though Oracle ended up spitting out Red Hat after a taste, Ellison stated at the time -- once again, without hesitation -- that he was looking for something called a stack, a complete system that he could sell to customers ready-to-go, top-to-bottom. Red Hat may have been one way to acquire such a stack. But what would he do with Red Hat after he got it, Waters asked? The answer at the time was to fill the stack with the last component he'd require for a straight flush:

    "If I were running Red Hat, the first thing I'd do is bring in MySQL," Ellison told Waters, straight up. Then he pre-empted Waters' next question: "This is a two-edged sword: You further alienate IBM, you further alienate Oracle by doing all of this, but then you get your stack."

    Another reason why he'd go after MySQL? Simple: To keep IBM from doing the same thing.

    Like Babe Ruth pointing at the spot on the outfield wall over which the ball would soon be streaking, Ellison called his play before he made it. And while analysts today struggle to reconstruct some kind of near-term strategy that deals with the cloud, Ellison's swoop kept IBM from making a key play that would have given it a more competitive position against Oracle -- and Ellison's reasoning was probably pretty much that simple, just as he explained it three years ago.

    As our independent analyst Carmi Levy told us today, now Ellison can start grinding it in: "Hardware-software customization is a huge draw for enterprises looking for turnkey solutions that don't require extensive in-house tuning. Oracle puts itself in position to take on IBM -- which uses its own server lineup as the basis for tweaking database and business intelligence solutions -- and other vendors that have tightened their partnerships in this area in recent years," Levy told Betanews. "More tightly integrated offerings allow Oracle/Sun to target not only the enterprise market with higher performance offerings, but also the mid-sized enterprise space with more cost effective solutions that were previously the exclusive domain of larger shops."

    The deal also keeps IBM from getting Java -- which would have been a powerful combination -- while giving Ellison something to fuse with his Fusion middleware, which is Java-dependent anyway.

    "Java could potentially be the issue that either makes or breaks this deal," stated Levy. "It's easily the shining light that attracted Oracle to Sun in the first place. The question revolves around how willing and able Oracle will be to invest the resources necessary to bring integrated solutions to market. The acquisition theoretically points Oracle toward end-to-end domination of the Java space, but execution remains the major sticking point. It's too early to tell whether Oracle has what it takes to ensure Java's relevance going forward."

    Yet Oracle President Charles Phillips, in this morning's joint announcement, made it clear that his company does have a plan in mind for Java...even though the "Java space" to Oracle may not be the "Java space" as Sun had envisioned it:

    "Last week, we held our CIO advisory board [comprised of] our largest customers...and they applauded our move into database machines and storage vis-à-vis our recent Exadata announcement. But now they're asking us to step into a broader role by delivering a highly optimized stack from app to disk, based on standards. The general feedback was that they wanted more than standard components," stated Phillips. "They now want standardized deployments and configurations, and fully and consistently instrumented software and hardware to manage their systems, diagnose issues, and audit uses."

    Too much of Oracle's money is being spent on diagnosing software/hardware compatibility issues, Phillips went on, which would not be a problem if the software came pre-configured to start with. In other words, if Java would just find its proper place in the stack, everyone would be happy because costs would be structured lower.

    "With this acquisition, we can engineer a true system with consistency across all of these products...Oracle already has a successful embedded software business, with the number one embedded database in the world," said Phillips. "Java is embedded in over a billion devices. We will also now have the largest software development community in the industry, and...Java is also the platform for future applications."

    Larry Ellison wanted a stack, and now he's got one. And IBM doesn't -- at least, not this one. Typically, fuzzier company strategies with two or three possible outcomes make for more intriguing analysis stories. This time around, the handwriting was on the wall for several years, and this morning, the handwriting is on the dotted line.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/20/Microsoft__All_netbooks_will_run_any_Windows_7'

    Microsoft: All netbooks will run any Windows 7

    Publié: avril 20, 2009, 10:12pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    There will very likely be some netbooks shipped in the US and other developed markets this year that will feature the Windows 7 Starter Edition SKU announced in February. But this version will have some limitations to it that go beyond the inability to display the Aero front-end using Windows Presentation Foundation -- the direct implication of a statement made by a Microsoft spokesperson to Betanews this afternoon.

    That does not mean premium editions of Win7 won't be able to run on netbooks, the spokesperson continued; rather, OEMs may end up choosing to pre-install this limited edition on the netbooks they sell.

    "Any SKU of Windows 7 will be able to run on netbooks, which means that the hardware limitations of a netbook won't affect the functionality of Windows 7 regardless of SKU," the spokesperson told us. "With Windows 7, Microsoft is on track to have a smaller OS footprint, an improved user interface that should allow for faster boot-up and shut-down times, improved power management for enhanced battery life, enhanced media capabilities and increased reliability, stability and security."

    The Journal article suggested that one of the other limitations a Starter Edition user may face is the ability to run only a limited number of applications simultaneously -- a restriction, we pointed out to Microsoft's spokesperson, that would require a fairly sophisticated application of group policy and therefore, arguably, a more elaborate SKU of Windows than one that omits such a limitation altogether.

    The spokesperson would not deny the existence of this or any other specific limitation for Starter Edition, but went on to say that this edition should not be perceived as "defeated" or encumbered (agreeing with our contention that it would need to be elaborate to effectuate the limitation) because it enables customers to choose systems that may be better suited to their needs. Last February, the company announced that Starter Edition would be available in developed markets through retail channels, although Windows 7 Home Basic -- a version which will likely contain limited features -- will only be available in developing markets.

    "These engineering investments allow small notebook PCs to run any version of Windows 7, and allow customers complete flexibility to purchase a system which meets their needs," the spokesperson told us. "Small notebook PCs can run any version of Windows 7. For OEMs that build lower-cost small notebook PCs, Windows 7 Starter will now be available in developed markets at a lower cost. For the most enhanced, full-functioning Windows experience on small notebook PCs, however, consumers will want to go with Windows 7 Home Premium, which lets you get the most out of your digital media and easily connect with other PCs."

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/20/Now_an_Oracle_product__what_happens_to_MySQL_'

    Now an Oracle product, what happens to MySQL?

    Publié: avril 20, 2009, 7:03pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Attendees at the open source database's annual developers' conference in Santa Clara this morning are waking up to the incredible news that their own product -- whose value to Sun Microsystems was to have been lauded by none other than Sun co-founder Andreas von Bechtolsheim in a keynote address scheduled for Thursday -- is now owned by Oracle.

    The initial value of MySQL to Oracle -- up until this morning, its biggest competitor -- was made conspicuous by the product's absence from this morning's joint press conference featuring Sun and Oracle executives. Sun CEO Jonathan Schwartz mentioned MySQL along with OpenOffice as part of what he now calls the world's largest supplier of open source software. Until Oracle's SEC filings are made public, we won't know whether MySQL even factored into its valuation of Sun.

    LinuxQuestions.org editor Jeremy had, well, Linux questions this morning after the news was announced: "With much of Sun's revenue coming from hardware, will [Oracle] spin that division off or use it to focus more on a complete Oracle stack, that includes everything from hardware to database?" Jeremy wrote. "Moving to the individual parts of that stack, will Oracle continue with the SPARC CPU line or be interested in the more commodity x86 lines? At the OS level, will Oracle continue to focus on Linux and their Unbreakable implementation or will they attempt to keep Solaris alive? Oracle has been contributing to Linux in a significant way recently, and it would be a huge loss for that to go away."

    Independent analyst and Betanews contributor Carmi Levy believes the deal could enable some intriguing opportunities for Oracle, which up to now has had more difficulty breaking into the lower end of the database market. There, MySQL rules among open source users, and Microsoft SQL Server has had a stronghold among the commercial set.

    "This thinking extends into the lower end of the market as well, given how the Sun acquisition gives Oracle access to MySQL," Levy told Betanews. "While no one could ever rightfully claim that MySQL threatens Oracle's higher-end database offerings, its addition to the portfolio gives Oracle additional leverage in a market with significant growth potential. The MySQL installed base of approximately 11 million gives Oracle sales teams fertile opportunity to have conversations they haven't previously had."

    But MySQL's support base is comprised in large part by independent developers, and that's by design. Already, those independent developers are waking up to a new world, including software engineer Ryan Thiessen. An 11-year MySQL veteran, Thiessen is scheduled to speak at the MySQL Conference this week; and in a blog post this morning entitled simply, "Stunned," he reveals his bewilderment:

    "Last time this year I was cautiously optimistic about Sun's purchase of MySQL. But not this year -- it's fear and disappointment over what this means for MySQL," Thiessen wrote. "When I read this as a rumor a few weeks ago I thought it was a joke of an idea. Why would a high margin software company want to buy a declining hardware business, even if that hardware is great? As for their software, I cannot imagine that Oracle is interested in Java, MySQL, etc as revenue generating products, it would just be a tiny blip for them." Surprisingly, Java and Solaris were mentioned by Oracle CEO Larry Ellison as the key motivating factors, not the SPARC business -- in fact, it was SPARC that failed to generate a blip. MySQL got at least that much -- this for a business that was worth at least a billion to Sun just 15 months ago.

    MySQL's founders have remained on the record as fiercely against the use of software patents, as detrimental to the spirit and ethics of open source. Oracle is not diametrically opposed to that line of thinking, having made statements in principle throughout this decade opposing the creation of patent portfolios for predatory purposes.

    Oracle's 2000 statement on the issue, which is essentially unchanged, reads, "Patent law provides to inventors an exclusive right to new technology in return for publication of the technology. This is not appropriate for industries such as software development in which innovations occur rapidly, can be made without a substantial capital investment, and tend to be creative combinations of previously-known techniques."

    But Oracle does support the use of patents for defensive purposes, particularly when a company is attacked by one with a big portfolio. That fact alone does not mean Oracle can't, or hasn't, used its software assets very aggressively. In October 2005, the company acquired its first widely used open source database component: Innobase, whose InnoDB contained enterprise-class features that were actually rolled into MySQL 5.0. By acquiring InnoDB, Oracle ended up owning a part of MySQL anyway, in a move that InfoWorld's Neil McAllister astutely reasoned may have been intended to keep the lower-class database snugly in the lower class, while siphoning customers into Oracle's upper class.

    "That's why when Oracle snapped up Innobase in early October it was easy to interpret the move as a major offensive on Oracle's part," McAlister wrote then. "By taking control of one of MySQL's vital internal organs, Oracle gains the power to crush the upstart at a whim, simply by closing its grip around Innobase. But, seriously, why would Oracle do that?"

    Four years later, we have a closer glimpse of an answer to McAllister's question: By taking control of the geography of enterprise databases over a larger area, Oracle keeps MySQL safely within its own continent, either locked away or funneling new customers across the channel. Maybe no one could ever rightly claim that MySQL was a genuine threat, but today, Oracle's move ensures that it never can be. And that's the new world that developers in Santa Clara are waking up to.

    Copyright Betanews, Inc. 2009

  • Lien pour 'BetaNews.Com/2009/04/17/RIM_finally_distributes_BlackBerry_System_4.5__enables_Pandora'

    RIM finally distributes BlackBerry System 4.5, enables Pandora

    Publié: avril 17, 2009, 8:49pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The real reason people started buying Windows 3.0 wasn't the wealth of new software made for Windows for the first time. Seriously, that wasn't the reason. By the time people learned about stuff like Lotus 1-2-3G and WordPerfect for Windows -- which were both going to change the world, if you'll recall -- they were already sold on Windows 3.0 for another reason: the smooth on-screen fonts. Because let's face it, Windows/386 looked like it belonged on an 8-bit computer compared to the Macintosh.

    Late last night, the BlackBerry System 4.5 upgrade finally came through for users of those older-style units that actually look like BlackBerrys. In it, you'll find relief...in the form of a replacement for the thing that made the 8800s and older units look pale next to the (slow) Storm, or the iPhone: the disgusting-looking default system font.

    BlackBerry System 4.5 running on an 8800 World Edition handset.

    Okay, so the font file itself is still in the system, but you won't actually use it or want it. The old system fonts looked like something spat out of a Centronics dot-matrix printer, circa 1978. I remember selling dot-matrix printers in the early days, including one of the first to offer a switch that converted you from "sans-serif" to "serif," or in that particular case, from "legible" to "illegible." Until today, we 8800 users had something called "BBClarity" (which at least meant, devoid of junk) and "BBMilbank," which looked like it belonged on one of those programmable highway warning signs, shouting, "BEWARE OF ZOMBIES."

    The new fonts in System 4.5 -- BBAlphaSans and BBAlphaSerif -- are both pleasant, legible, and non-offensive. Most importantly, they actually enable the use of some applications that have been either available for multiple BlackBerry models, or waiting around until someone finally gave the word.

    BlackBerry System 4.5 running on an 8800 World Edition handset.

    One of those apps is the mobile edition of Pandora, the original programmable radio stream that learns your musical tastes as you listen. Having Pandora in my pocket is reason alone to own a mobile handset; my friend Angela can have her YouTube, thank you, I'll stick with my own channel of music made by musicians and not machines.

    BlackBerry System 4.5 running on an 8800 World Edition handset.

    The Mobile Pandora isn't as conversational as the PC edition -- for instance, you can't go into your profile and load up all your bookmarks. But you can get an explanation of why you're hearing the song you're hearing, and this little feature alone shows you why the System upgrade was necessary -- on the old system, there's no way this information would be the least bit legible in a single alert box.

    But this version appears to have been built with the understanding that Pandora users will most likely use their PCs to program their personal stations, not their BlackBerrys. And that's fine, because while you're working out in the gym, riding your bicycle, or tuning out the noise of something else purporting to be music, you don't really have that much time to go poking buttons.

    BlackBerry users take note: You shouldn't try to upgrade your systems using your BlackBerry Desktop Software for Windows until you've upgraded that too. The only way you can move up from version 4.2.x to 4.5 safely is to use Internet Explorer (not Firefox or any other browser, thanks to the use of an ActiveX control), and link to this address. Download the new version of the ActiveX control, which will then bootstrap a process that will enable you to download the new version of the Application Loader for your desktop. The old Application Loader will not work for this purpose, and you might find that out the hard way unless you upgrade this way. Then be sure to exit IE and unplug your BlackBerry from the USB cable for a moment (the software should tell you when), then reconnect it before starting the upgrade.

    What passes for entertainment in BlackBerry App World.

    The upgrade process will back up your existing calendar, e-mail, media, and personal applications automatically, and will restore them after the new modules are loaded in and verified. The verification process, for some reason, is the longest stage -- be prepared to wait as long as 45 minutes. The process in its entirety could take an hour, maybe a little longer.

    Not all of your old applications will work in the upgraded system without being replaced. Most surprisingly, BlackBerry App World is one of them: You'll need to manually uninstall it, then reinstall it from this address.

    BlackBerry System 4.5 running on an 8800 World Edition handset.

    You'll notice some differences right away, some thanks to the new system, others on account of smart users who truly appreciate the low value of farting apps. The catalog is much more pleasant to read, even if -- sadly -- some of the entries haven't changed all that much since App World's premiere earlier this month. The "before" and "after" pictures above tell the story. ("ECOE" isn't very self-explanatory, is it? It's a Ticketmaster application, so you'd think it would have been named something like "Ticketmaster Application.")

    Nothing makes a smartphone user happier than not being embarrassed. So much applause to the folks at RIM who, while busy concocting all sorts of new goodies for the Storm (I hear something called speed is in the works), still throw us old-timers from '07 and '08 a bone every now and then.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/16/There_will_be_an_Office_2010_public_beta_sometime__reasserts_Microsoft'

    There will be an Office 2010 public beta sometime, reasserts Microsoft

    Publié: avril 16, 2009, 11:34pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The news from Microsoft Tuesday evening of the first technical previews of Office 2010, coming in the third quarter of this year, referred to their being limited to several thousand testers -- though Exchange product manager Julia White told us the number could reach a few hundred thousand, after all the invitations were processed. But a technical preview is not exactly a "public beta," so when a prepared Q&A Monday with Microsoft senior VP Chris Capossela failed to mention a public beta for Office 2010, some bloggers and journalists came to the conclusion that there wouldn't be one.

    So when Microsoft reported today that there would be a public beta, it was reported in various locations that the company had changed its mind. In fact, as a Microsoft spokesperson verified for Betanews this afternoon, not only was there no change of mind, but no statement regarding the lack of a public beta was ever made. Microsoft told Betanews earlier this week that there would be a public beta of Office 2010, though the company has not yet finalized a date.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/16/Code_frozen_Firefox_3.5_beta_gains_4%_more_speed_against_Chrome_2'

    Code-frozen Firefox 3.5 beta gains 4% more speed against Chrome 2

    Publié: avril 16, 2009, 10:13pm CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    We may see the latest Mozilla Firefox 3.5 public beta -- now with the whole numbering thing straightened out -- as soon as next Wednesday, and quite likely a Firefox 3.0.9 update in the same timeframe. In the meantime, as Mozilla's developers test the final nightly build prior to the opening of the floodgates, Betanews tests reveal that regular Firefox users should see about double the speed and performance of Firefox 3.0.8, and 450% the performance of the final release of Microsoft Internet Explorer 8.

    But as Mozilla's developers make tweaks to Firefox's rendering engine and its new TraceMonkey JavaScript interpreter, Google's developers (some of whom, admittedly, are the very same people) are making tweaks to their development-series browser, Chrome 2.0.172.6. (Google's development browser now co-exists with its Chrome 1 series, which represents finalized code.) As a result, our latest tests show Apple may not hold claim to "the world's fastest browser" for much longer, as Chrome 2 pulls within 2% of Safari's general performance, and as Firefox 3.5 makes up some ground.

    The latest performance scores for the April 16 Firefox 3.5 nightly build (intended for private testing) in Betanews tests are 17% better overall than for Firefox 3.1 Beta 3, a public beta released last month. This is on the strength of 18% better CSS rendering performance, 13% better JavaScript object handling, and 27% better overall JavaScript processing scores, in a suite of performance tests produced by independent developers and collected by Betanews.

    So as Chrome 2 improves, Firefox 3.5 improves even faster...though it still has quite a lot of ground to make up. Using Betanews' cumulative index scoring, in which a 1.0 score represents the performance of Internet Explorer 7, the latest 3.5 nightly build scored a 9.19 -- meaning, when all the test results are averaged out, 919% of the speed and performance of IE7, which even Microsoft has acknowledged to be something of a dog. Firefox 3.1 Beta 3 scored a 7.85 in these same tests, which were conducted in a Windows Vista-based Virtual PC environment (not the fastest, but still sufficient to gauge relative performance).
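    Betanews doesn't publish the exact weighting of its suite in this article, but if you assume -- purely for illustration -- that the index is an average of per-test speedups against an IE7 baseline, the arithmetic looks like this sketch (all timings hypothetical):

    # Hypothetical per-test completion times in seconds (lower is better);
    # the real Betanews test suite and its weights are not given here.
    ie7_baseline = {"css": 12.0, "js_objects": 30.0, "js_general": 45.0}
    candidate    = {"css":  1.4, "js_objects":  3.1, "js_general":  4.9}

    def index_score(baseline, browser):
        """Average of per-test speedups vs. IE7, so IE7 itself scores 1.0."""
        ratios = [baseline[test] / browser[test] for test in baseline]
        return sum(ratios) / len(ratios)

    print(round(index_score(ie7_baseline, ie7_baseline), 2))  # 1.0 by definition
    print(round(index_score(ie7_baseline, candidate), 2))     # 9.14 -- in the
    # same ballpark as the nightly build's 9.19, with these invented numbers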

    The Safari 4 Beta scores from last month still represent the latest available index scores, with a 14.39 -- and Apple maintains the lead. But not by much, as Chrome -- the #2 chariot, driven by Charlton Heston -- pulls up uncomfortably close with a new score of 14.09, nearly 10% better than last month's build.

    Now, do you remember the non-competitive days of Web browsers, so very long ago...2008? Imagine Web browsers getting faster at a rate of 10% per month, every month. Real competition can certainly change the landscape.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/15/It_s_Office_2010__First_technical_previews_due_in_Q3'

    It's Office 2010: First technical previews due in Q3

    Publié: avril 15, 2009, 3:36am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Microsoft confirmed to Betanews Tuesday that the first technical previews of the applications suite we can now call Office 2010 will be distributed to special participants -- probably in limited number at first, just like before -- in the third quarter of this year.

    Julia White, a product manager for the Exchange Server team (which also has a major announcement this week), told Betanews that this limited number of initial testers will probably still number in the hundreds of thousands, suggesting that it will go beyond the usual MSDN and TechNet subscriber crowd. In tandem with this development track, SharePoint Server 2010, Visio 2010, and Project 2010 will also enter technical preview during the same timeframe, especially since they will need to be tested together in order to take advantage of new features.

    Today's news is piggy-backed alongside the announcement, made official Tuesday night, that Exchange Server 2010 will enter its first public beta phase on Wednesday. Betanews will have more information to share about this news the moment the gates are officially opened, though we were told that many of Exchange Server's new features will make use of what could be a dramatically changed Outlook 2010 component.

    That could make Exchange difficult to test for now, especially since Outlook 2007 -- many of whose originally planned changes didn't make the final cut -- will not have facilities for new Unified Communications features. Many of those UC features have yet to be announced, though we can expect them to turn up in Outlook 2010. Radically improved handling of voice mail capability is slated for both ES 2010 and the next Outlook, White told us. Even the very first betas of ES 2010 will include the ability for Outlook to show textual previews of voice mails, with the server translating voice messages into text.

    As a result, your first peek at what Outlook 2010 could actually look like may come from ES 2010's Outlook Web Access. As ES admins know quite well, OWA is a browser-based client designed to look and work just like Outlook. In the case of ES 2010, OWA will look just like Outlook 2010 is supposed to look.

    It's not the optimum state of affairs for Microsoft, which had to delay Office 14's progress last February for still undisclosed reasons. But it does show the company has had the courage to proceed with its Exchange rollout plan on schedule, including its emphasis on the server as a development platform for communications tools, using a newly turbo-boosted PowerShell as the basis for that platform.

    The first technical previews of Office 2010 will include native support for OpenDocument Format as an alternative default for the first time, along with revised support for the new ISO 29500 format that arose from Microsoft's OOXML standardization effort.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/15/ODF__PDF_become_part_of_Microsoft_Office_on_April_28'

    ODF, PDF become part of Microsoft Office on April 28

    Publié: avril 15, 2009, 1:11am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In a post this afternoon in an unusual location -- the Microsoft Update blog rather than the Office blog -- the company officially gave its heads-up that Office 2007 SP2 will be released in two weeks, on April 28. With it, users will have the ability to export their open OOXML and "compatibility mode" documents to Open Document Format and to Adobe's PDF format, in the company's first implemented stage of its support for alternate and interoperable document formats.

    This will not yet be the same as adopting .ODT documents, .ODS spreadsheets, and .ODP presentations as alternate standard formats for Office applications -- that feature is coming in the next edition of the suite, now due sometime next year. Up to now, the ability for Office 2007 apps to save to PDF and to XPS -- Microsoft's own try at an interoperable display format -- has been available as a downloadable add-in. Now, that functionality will be available to new users without the add-in needing to be installed.
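    Once SP2 lands, those save-as targets should be reachable through Word's automation model as well as the Save As dialog. A sketch, assuming Windows, Office 2007 SP2, and the pywin32 package -- assumptions of ours, not details from Microsoft's announcement; the file paths are hypothetical, and the format constants shown are the documented WdSaveFormat values for PDF and ODF text:

    # Sketch only: assumes Windows, Office 2007 SP2, and the pywin32 package.
    import win32com.client

    wdFormatPDF = 17               # documented WdSaveFormat value for PDF
    wdFormatOpenDocumentText = 23  # documented WdSaveFormat value for .odt

    word = win32com.client.Dispatch("Word.Application")
    doc = word.Documents.Open(r"C:\reports\quarterly.docx")  # hypothetical path
    doc.SaveAs(r"C:\reports\quarterly.pdf", FileFormat=wdFormatPDF)
    doc.SaveAs(r"C:\reports\quarterly.odt", FileFormat=wdFormatOpenDocumentText)
    doc.Close()
    word.Quit()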

    "The 2007 Office Suite SP2 has been tested and is supported for Internet Explorer 8," reads today's announcement. "Windows Vista SP2, Windows Server 2008 SP2, Windows 7, and Windows Server [2008] R2 will all be supported upon their release."

    Amid other enhancements in SP2: Long-time testers will recall how the new charting object model created for Excel 2007 failed to make the cut for Word and PowerPoint 2007, and then failed again for SP1. Finally, SP2 bridges that gap, so the charting functionality for the main three applications is now evened out.

    And in an improvement that has been even longer in coming, the newly patched Access will give business users the ability to export reports to Excel spreadsheets. Access is no longer among the more widely used database management products, being highly localized in this world of distributed data; yet many categories of business, including legal and real estate, need a way to produce reports from queries that filter a database, in order to export that data yet again to a new line-of-business application. This addition will help those businesses lay stepping stones for such transitions.

    Copyright Betanews, Inc. 2009
  • Lien pour 'BetaNews.Com/2009/04/15/The_pendulum_swings_toward_Microsoft_in_the_Alcatel_Lucent_IP_battle'

    The pendulum swings toward Microsoft in the Alcatel-Lucent IP battle

    Publié: avril 15, 2009, 12:20am CEST par Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    The intellectual property war that at one point had Microsoft owing Alcatel-Lucent a penalty of over $1.5 billion may end up with the latter actually owing the former. First that penalty was reduced in light of new Supreme Court guidelines, and then last September an appeals court overturned the jury verdict, ruling in favor of Microsoft.

    Yesterday, Microsoft was handed another victory, as first reported by my friend and colleague Liz Montalbano at PC World: The US Patent and Trademark Office invalidated two Alcatel-Lucent patents concerning methods by which a user selects calendar entries from an onscreen menu. Microsoft had owed the France-based holder of the Bell Labs patent portfolio some $357.7 million, which has since accrued interest.

    But should a federal court act on the USPTO's decision and throw out that verdict -- the last chunk of that billion-and-a-half -- all that may be left is the subject of Microsoft's counterclaim, for which it seeks a half-billion: a way to charge remote multimedia users for quality of service rather than for bytes downloaded. Whether Alcatel-Lucent would have any fight left in it to play defense is doubtful.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/Amid_the_minus_signs__Intel_says_there_s_a_bright_side'

    Amid the minus signs, Intel says there's a bright side

    Published: April 14, 2009, 11:05pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If the general state of the economy weren't taking its toll on everyone this quarter, the numbers from Santa Clara would send stockholders racing toward the nearest open window: Operating income nosedived 68% from the prior year's first quarter, to $670 million on $7.1 billion in revenue -- and that revenue figure is less than three-quarters of what Intel was taking in this time last year.

    But in desperate need of a plus sign, Intel is seizing upon one this afternoon: After all that cost-cutting is taken into account, net income was up 176% over the disastrous fourth quarter of last year, to $647 million. That's enough to have CEO Paul Otellini proclaiming the worst is over, telling investors, "We believe PC sales bottomed out during the first quarter and that the industry is returning to normal seasonal patterns."

    We'll see whether that theory holds up after the company's quarterly analysts' call later this afternoon, where we could learn what type of adjustments accounted for that bounce. However, we've already seen evidence that one type of adjustment didn't yield the bounce Intel usually hopes for: Even though it lumped sales of its Atom processor -- which should be higher, given the greater demand for netbooks and cheaper form-factor PCs -- together with its chipsets, sales in that merged department fell 27% from the prior quarter, to $219 million.

    Chipsets are not one of Intel's strong suits, and they typically drag down margins (overall gross margin of 45.6% is a big tumble from 53.1% in the prior quarter), so the fact that Atom isn't contributing enough to help flatten the losses in the small-components department is not a good sign.
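
    A quick back-of-envelope pass shows what those percentages imply about the prior periods; the figures below are derived only from the numbers above, so Intel's actual reported results may differ slightly with rounding:

        # Derive the implied prior-period figures from the stated percentages.
        q1_revenue = 7.1e9        # this quarter's revenue
        q1_op_income = 670e6      # down 68% year over year
        q1_net_income = 647e6     # up 176% from Q4 2008

        prior_year_op_income = q1_op_income / (1 - 0.68)   # ~$2.09 billion
        prior_q4_net_income = q1_net_income / (1 + 1.76)   # ~$234 million
        prior_year_revenue_floor = q1_revenue / 0.75       # > $9.4 billion

        print(f"{prior_year_op_income / 1e9:.2f}B  "
              f"{prior_q4_net_income / 1e6:.0f}M  "
              f"{prior_year_revenue_floor / 1e9:.2f}B")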

    [Chart: one-year INTC stock performance. View the full INTC chart at Wikinvest.]

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/Ooma_s_service_provider_denies_a_role_in_outage'

    Ooma's service provider denies a role in outage

    Published: April 14, 2009, 10:20pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    During yesterday afternoon's complete service outage at private VoIP service provider Ooma, along with the simultaneous hold-up of e-mail delivery for some BlackBerry customers, the provider's chief marketing officer, Rich Buchanan, told both customers and reporters through his Twitter feed that Internap, a data co-location services provider, was to blame. But in a statement to Betanews this afternoon, an Internap spokesperson denied any kind of service problem on its side of the network.

    The spokesperson told Betanews, "There was a ticket opened with Ooma at Internap regarding this issue...Our NOC personnel determined that there was nothing happening within our network and that it probably was a problem after the hand-off was made from Internap to the Ooma network itself. The Ooma personnel are still investigating...The packet loss they were experiencing during the approximate 90 minutes they were having issues, likely caused dropped calls to their customers (as VoIP is very sensitive to packet loss)."

    UPDATE Late this afternoon, Internap's spokesperson added that Ooma's personnel say the matter is still under investigation, and that they're not yet ready to state the incident happened entirely on Ooma's side of the network.

    At one point via Twitter yesterday, after the service outage had largely subsided, San Francisco Business Times reporter Patrick Hoge questioned Ooma's Buchanan, "Are Ooma's issues linked to more widespread Internet problems in the Valley today?" Buchanan responded, "Ooma issues were linked to an outage at Internap. It also affected RIM, Google, Yahoo, Blue Cross, [T-Mobile], Verizon, and others."

    But Internap's spokesperson denied that any service ticket was opened with any other customer yesterday besides Ooma.

    Another Twitter user suggested to Buchanan, "Can I suggest some redundancy on voicemail at least, so if this happens at least vmail gets through?" His response: "You certainly can. Failures like this expose the corner cases and we sure found one today."

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/EC_may_sue_Great_Britain_to_stop_a_sweeping_data_interception_law'

    EC may sue Great Britain to stop a sweeping data interception law

    Published: April 14, 2009, 9:44pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    "Do you want the Internet to turn into a jungle?" asked European Commissioner for the Information Society and Media Viviane Reding, to open her weekly English-language address this morning. "This could happen, you know, if we can't control the use of our personal information online."

    Comm. Reding's message accompanied an announcement that the EC has launched the first stage in what could be a long, drawn-out series of proceedings against one of its own member nations, the United Kingdom. At issue is the UK's handling of online privacy laws, under the nearly two-year-old administration of Prime Minister Gordon Brown. The surface issue is what made news in the UK, at least in the general press: The EC has been concerned that the behavioral advertising service Phorm, a service built in association with leading UK carrier BT, may be enabling data collection policies that go beyond the limits mandated by EC directives.

    But if the EC's problem was with Phorm, it could have made its complaint to BT, which hasn't been even partly state-owned since the early 1990s. No, the EC's complaint against Britain itself runs somewhat deeper than that, although this morning Comm. Reding was only willing to show the proverbial knife with the blade withdrawn. A paragraph deep down in this morning's announcement from the Brussels government shows how even the EC can bury the lede:

    "Under UK law, which is enforced by the UK police, it is an offence to unlawfully intercept communications. However, the scope of this offence is limited to 'intentional' interception only," reads the EC statement. "Moreover, according to this law, interception is also considered to be lawful when the interceptor has 'reasonable grounds for believing' that consent to interception has been given. The Commission is also concerned that the UK does not have an independent national supervisory authority dealing with such interceptions."

    Although the EC's concern may have been triggered by an investigation into the Phorm matter, as the announcement suggests, nothing about Phorm's behavioral advertising scheme has anything to do with government interception of private messages. British subjects will readily point out that the "interception" language more likely points to a sweeping new extension to existing law proposed last March 16 by Security and Counter-terrorism Minister Vernon Coaker, in a presentation to a key parliamentary committee. That extension is part of what was introduced last year as the Intercept Modernization Programme.

    Under the UK's interpretation of the IMP, the new law would force ISPs in the UK to submit communications data regarding their users to a central database maintained by the government. As MP Coaker explains it, the IMP is necessary in order to carry out the EC's directives, which mandate that communications data regarding who speaks with whom be kept on file for as much as two years. Coaker calls the creation of the IMP a "transposition" of EU law to the UK, as well as an extension to Internet traffic of existing UK law regarding telephone traffic.

    The purpose of a centralized database, MP Coaker explained to the committee, would be to cut through all the red tape: "To minimize the bureaucratic burden on businesses, particularly small businesses, we want to avoid four or five different communications service providers retaining the same data. So, in discussions with the communications service providers, we will look at who has the various data sets and we will specify through the notice who is required to retain what."

    In his speech, MP Coaker suggested that the UK would only need to retain communications data for as little as 12 months. ISPs would only need to retain data in instances where they were specifically requested by government to do so, he said, although it seems impossible for a business to be able to present any data to authorities over a period of time if it had not been retaining that data to begin with.

    What some ministers are concerned about is whether the law would extend to communications traffic over social networks, like MySpace or Facebook. Coaker stated that might step beyond the boundaries of the "transposition," though in this particular case, he left the door open, saying he'd welcome working with other ministers in perhaps extending the transposition in that direction.

    Comm. Reding addressed one aspect of the social networking data problem in her address this morning: "Social networking has a strong potential for a new form of communication and for bringing people together, no matter where they are. But is every social networker really aware that technically, all pictures and information uploaded on social networking profiles can be accessed and used by anyone on the Web? Do we not cross the border of the acceptable when, for example, the pictures of the Winnenden school shooting victims in Germany are used by commercial publications just to increase sales? Privacy must in my view be a high priority for social networking providers and for their users."

    But this morning's statement from the EC bolsters Reding's comments with some principles that could be used in a pre-emptive proceeding against the UK. Specifically, it alludes to the possibility that if the centralized database created through the IMP revealed something about a person "unintentionally," perhaps by tying together that person's contacts across different media, that information may be justified by law enforcement officials the same way "plain view" information in the US is deemed admissible in court without a warrant. And if the data behind such a revelation was compiled by someone or something outside of law enforcement entirely...say, a behavioral advertising service, then authorities could give themselves plausible deniability.

    "European privacy rules are crystal clear: A person's information can only be used with their prior consent," stated Comm. Reding. "We cannot give up this basic principle, and have all our exchanges monitored, surveyed and stored in exchange for a promise of 'more relevant' advertising! I will not shy away from taking action where an EU country falls short of this duty."

    Today's action against the UK marks the first stage of infringement proceedings. The government has two months to respond, after which the EC may issue a formal opinion. If that opinion is challenged or let stand, a formal court case may begin in the European Court of Justice.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/New_desktop_virtualization_scheme_will_enable_hybrid_Windows_deployment'

    New desktop virtualization scheme will enable hybrid Windows deployment

    Published: April 14, 2009, 7:22pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Just one month after its acquisition of a partner company called Kidaro, which produced desktop virtualization software, Microsoft yesterday announced the immediate availability of a greatly enhanced version of its own desktop virtualization package for its volume license customers. As part of the latest update to the Microsoft Desktop Optimization Pack (MDOP), the new Microsoft Enterprise Desktop Virtualization software (MED-V) will let companies deploy software running in older versions of Windows so that it appears on clients running newer versions such as Vista.

    What this means is that software which ran well in Windows XP or Windows 2000, along with Web-driven software that requires Internet Explorer 6 rather than IE7 or IE8, can now run in a virtual envelope that leverages Virtual PC. Meanwhile, clients' users won't notice anything unusual; legacy apps' icons will appear on their desktops as though those apps were installed on their local systems.

    "MED-V builds on top of Microsoft Virtual PC to run two operating systems on one device, adding virtual image delivery, policy-based provisioning and centralized management," reads a blog post yesterday from the MDOP team's Ran Oelgiesser.

    Now, application virtualization is nothing new for Microsoft; two years ago, its acquisition of SoftGrid maker Softricity enabled server-driven apps to appear as though they were installed on the client, including apps for older versions of Windows. What MED-V adds to the picture is a way for virtual envelopes to host legacy apps either way -- through the server or through the client, whichever happens to be most convenient at the time. The management software, part of the Kidaro acquisition, introduces a centralized platform for software management, so even the applications running within the envelope can be governed by policies administered outside the envelope.

    Under the architecture of the new system, a central server governs all the images of virtual systems -- "partial desktops," if you will, that run in virtual envelopes. Those images are then distributed in whole or in part through Internet Information Server, depending on whether those images are to be run on the client or through the server, respectively. The client user only sees the products of the virtual image integrated with his regular desktop.

    Outside of MSDN and TechNet subscriptions, MED-V will be available only as part of MDOP, and only through Microsoft's volume licensing service. The terms of those licenses had to be adjusted last week, easing some restrictions so that customers running older OS platforms under virtualization aren't charged for those older systems all over again.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/VoIP_provider_Ooma_recovers_from_complete_service_outage'

    VoIP provider Ooma recovers from complete service outage

    Published: April 14, 2009, 5:00pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    12:45 pm EDT April 14, 2009 - This afternoon, a spokesperson for the data center co-location service Internap -- a name brought up in connection with both the Ooma and RIM service failures yesterday, which took place at approximately the same time -- denied any service outage, though admitted to some routine router replacements and maintenance.

    Ooma's single data center, mentioned by its technical VP in his blog post yesterday, is located on the West Coast; Internap, meanwhile, is based in Atlanta. Any service issue linking the two would have had to be coincidental.

    11:00 am EDT April 14, 2009 - From the very beginning, Ooma's value proposition has been to let customers rapidly create their own Internet telephone infrastructure through the deployment of network hubs, sold for a one-time fee of $250. But yesterday, the entire hub infrastructure of the Ooma network failed for as much as four hours during lunchtime on the West Coast, in what its VP and technical chief late yesterday called an "event."

    "Between 2PM and 3PM [PDT Monday], Internet connectivity was slowly being restored to our service," wrote Dennis Peng on the company's blog. "However, the flood of ooma Hubs coming back online created an immense amount of load on our provisioning systems. We rushed to add capacity to the system, but the nature of the network outage had interfered with the system's ability to recover by itself."

    While VoIP competitors such as Skype rely on the variable P2P capacity of users' PCs to provide the network at large with the bandwidth it requires, Palo Alto-based Ooma's system relies on separate VoIP hardware clients -- the Ooma hubs. These devices are sold through outlets such as Amazon; after paying the $250 and installing the hub themselves, customers pay no other fees for the lifetime of the product. Calls may be placed through Ooma to any telephone number -- not just to other Ooma users.

    As the company's sales pitch puts it, traditional phone companies "limit your choices and charge exorbitant fees for the luxury of using outdated services. Instead of answers, the phone company gives you dead ends. They have invested hundreds of billions of dollars into a system meant to lock you in." As Peng described it yesterday, however, in a rather honest mea culpa, the outage appeared to be related to the general Internet -- something out of Ooma's control. Though service engineers rushed to mitigate the damage, he says something about this particular Internet outage made it impossible for the service's self-healing hub networks to repair their own damage. The long-term solution to prevent a recurrence of this problem, he added, may be to invest some money into the system.

    "Discussions have already started on how to make the service resilient to a similar event in the future. Ooma currently has one data center located in west coast," wrote Peng. "We have planned to light up a second data center in the midwest or east coast this year, and this outage has served as a stark reminder for us to get moving on that. This has also served as a good opportunity for us to re-evaluate our contingency and business continuity plans."

    Participants in Ooma's ongoing Twitter discussion yesterday blamed the service outage on a bigger service failure at Internap, a data center, co-location, and content delivery services provider based in Atlanta. They said such an outage affected multiple VoIP services and other major businesses, including Google's Gmail and RIM's BlackBerry service. Indeed, BlackBerry users did report a service backlog, during roughly the same timeframe yesterday, though no mail ended up being lost. Internap has not made any public statements regarding a service outage, and Betanews has contacted Internap for clarification.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/14/eBay_to_unload_StumbleUpon__and_that_might_not_be_all'

    eBay to unload StumbleUpon, and that might not be all

    Published: April 14, 2009, 12:01am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    According to a statement today from StumbleUpon's chief architect and founder, Garrett Camp, the business relationship between the content location service and auction service eBay has ended.

    "This change will help StumbleUpon move quickly and stay true to its focus: helping people discover interesting Web content," Camp wrote. "Our goal is to make StumbleUpon the Web's largest recommendation engine, and we think this is the best way to get us there."

    Accelerating growth, ironically, was the reason Camp gave for eBay's acquisition of his company, upon completion of that deal in May 2007.

    According to reports from TechCrunch and others, eBay had hired Deutsche Bank last September to scout for a buyer for the content location service. That's the same Deutsche Bank that has frowned from the very beginning at eBay's acquisition of Skype, the P2P conferencing service. The bank's complaint has always been that Internet VoIP is a low-margin business -- too low for anyone investing in it for growth purposes.

    Last January's financial figures tell the tale. eBay's full-year revenue was nicely higher, up 11% annually to $8.54 billion, though revenue for the final quarter of the year dropped 7% annually -- not a surprise in this economy. But income tapered off more sharply than eBay would have liked. The problem: low-margin businesses draining the company's resources. GAAP operating margins for the final calendar quarter subsided from 28.7% to 22.3%.

    So StumbleUpon is one low-margin business off the books. Perhaps there will be another one: This morning, The New York Times reported that Niklas Zennstrom and Janus Friis, the fellows who sold Skype to eBay in September 2005 for a reported $2.6 billion, are seeking help from private equity firms to buy the service back.

    eBay's next quarterly report comes a week from Wednesday, with the annual stockholders' meeting planned for the following week. The time may be ripe to get rid of some more of those low-margin resources.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/13/Rather_than_submit_to_new_Korean_law__YouTube_turns_off_user_uploads'

    Rather than submit to new Korean law, YouTube turns off user uploads

    Published: April 13, 2009, 10:39pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In the wake of a draconian new South Korean law passed April 1 -- one that could force some ISPs to let authorities suspend their customers' Internet accounts, or face fines -- Google's YouTube division has turned off some features that could, if misused under the new law, land its customers in prison.

    The South Korean National Assembly narrowly passed a sweeping new law whose purpose was to create a system of accountability for the nation's Internet users. While ostensibly the new law is designed to discourage piracy, Korean journalists such as Korea Times' Kim Tong-hyung provide evidence that the law's true purpose may be to enable government authorities to keep tabs on all kinds of online behavior, including political and social networking.

    The new law, whose translated bill title is the Orwellian-sounding "Comprehensive Measures for Information Protection on the Internet," literally makes it a crime for someone to post defamatory information about another person, should that person register a complaint. To make the law workable, the country is instituting a "real-names" login system for all its ISPs, in an attempt to create an audit trail leading every kind of transaction back to a traceable source. All this, ostensibly, in the name of preventing piracy.

    The English-language Korean news provider The Hankyoreh quotes the Internet division chief for the Korea Communications Commission, Lim Cha-shik, as heralding the law's passage as a way to calm citizens' fears "about an increase in the disadvantages associated with Internet use, such as personal information leaks and the spread of harmful information."

    Rather than find itself in the middle of a future political and civil rights debacle with another Asian nation, Google yesterday decided to suspend its Korean users' ability to upload any videos, or to post any kind of commentary alongside videos. The changes were announced last Thursday on the YouTube Korea blog.

    Also that day, the company's VP for global communications, Rachel Whetstone, posted a lengthy explanation, which also amounted to something of an apology. Translated into English, it loses a lot of its grace, although it quite clearly says that Google does not wish to be used as an instrument for governments' dissemination of information about its users.

    Whetstone also called into question the need for governments to monitor certain categories of communications, noting that while Germany has banned the practice of Nazism, it encourages communication about Nazism in order for citizens to be conscious of how evil the practice actually is.

    This afternoon, Google sent Betanews an English-language human translation of the company's blog post, which includes this: "We have a bias in favor of people's right to free expression in everything we do. We are driven by a belief that more information generally means more choice, more freedom and ultimately more power for the individual. We believe that it is important for free expression that people have the right to remain anonymous if they choose."

    The Hankyoreh also did a much better job of translating Whetstone's Korean statement than Google's automated service, citing her as having written, "Google thinks the freedom of expression is most important value to uphold on the Internet. We concluded in the end that it is impossible to provide benefits to internet users while observing this country's law because the law does not fall in line with Google's principles."

    Users may still be able to overcome YouTube's roadblocks -- if they want to take the risk -- by changing their country of origin in their profiles to any other country. A YouTube blog page actually explains this. Even so, users would be taking a risk, especially if the new real-names login system takes account of uploads at the ISP level. Nor does it seem likely that such a bypass would completely absolve YouTube, under the new law, of conspiracy to commit an insult.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/13/IE8_automatic_update_option_likely_to_begin_next_week'

    IE8 automatic update option likely to begin next week

    Published: April 13, 2009, 6:13pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In a heads-up message on the company's IE blog over the Easter weekend, Microsoft Internet Explorer 8 lead program manager Eric Hebenstreit warned that as soon as next week, some Windows users will automatically be given the option of downloading IE8. It will not be a massive land rush, and as Hebenstreit repeated, the company's new Web browser will not automatically install itself.

    "IE8 will not automatically install on machines," the program manager wrote, emphasizing what will be Microsoft's general policy in this new and more careful era of interoperability. "Users must opt-in to install IE8."

    So although users with Automatic Updates turned on may receive something next week, that something will not be the Web browser itself. Instead, it will be a "High Priority" (for XP and Windows Server 2003) or "Important" (for Vista and Windows Server 2008) "Welcome to Internet Explorer 8" message, giving the user the option of being asked again later, installing IE8 now, or not installing IE8 at all. Once she declines, Automatic Update won't bother her about it again, Hebenstreit wrote, though she'll still be able to install IE8 manually if she changes her mind later.

    Last January, Microsoft began distributing an IE8 Blocker Toolkit for larger enterprises, a policy-based mechanism enabling admins to prevent their clients' automatic updates from receiving the Welcome message package if they elect to remain with IE7 (or earlier). However, general users need not download this toolkit to block the automatic update themselves. As the instructions reveal, the policy simply creates a new Registry value, DoNotAllowIE80, under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Setup\8.0. If that value exists and is set to 1, then automatic distribution of the IE8 Welcome package will be blocked, according to the company.
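
    For a user comfortable with scripting, setting that value by hand amounts to a few lines. Here's a minimal sketch using Python's standard winreg module, run from an administrator account; treating DoNotAllowIE80 as a DWORD value under the Setup\8.0 key is our reading of the toolkit's instructions:

        # Create the blocker value the IE8 Blocker Toolkit's policy would set.
        # Writing under HKEY_LOCAL_MACHINE requires administrative rights.
        import winreg

        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                               r"SOFTWARE\Microsoft\Internet Explorer\Setup\8.0")
        winreg.SetValueEx(key, "DoNotAllowIE80", 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)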

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/10/Analyst_Roger_Kay_takes_a_cue_from_the_NAB__with_the__Mac_Tax_'

    Analyst Roger Kay takes a cue from the NAB, with the 'Mac Tax'

    Published: April 10, 2009, 11:42pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    It should be no surprise, especially to long-time Mac users, that noted analyst Roger L. Kay, currently with Endpoint Technologies, is a supporter of the Windows "ecosystem." His opinions with regard to Windows are very much on the record, and he and I have often joined our colleagues in brisk, lively, but fair discussions about the relative value of software and hardware on different platforms.

    So, frankly, Kay's latest white paper (PDF available here) -- a cost examination for home users planning complete at-home networks on Windows versus Mac platforms, one that Microsoft admits to having sponsored -- comes to conclusions that should surprise no one, on two fronts: First, Kay illustrates how much more individuals are likely to pay for Apple equipment versus brand-name gear from suppliers such as Dell and HP. Second, Kay takes Apple to task for charging a premium -- and that he's done so isn't news either.

    Not even Kay's key metaphor is particularly new; those of us who've carried on conversations with him have heard it before. But this week, Microsoft has been pushing Kay's white paper by attaching itself to that metaphor -- one that has the strange ring of a similar political approach being taken by the National Association of Broadcasters on a completely separate issue. Kay -- and Microsoft -- are calling the extra money some customers are willing to pay for Mac equipment an "Apple tax."

    "For the past several years, Apple has been gaining share based on improved product offerings and an aggressive advertising campaign as well as Microsoft's stumble on Vista," Kay writes, shocking no one. "The combination of Apple's great execution and Microsoft's missteps has led a lot of converts to the Apple world. Mac is in flood tide. Cool is in... Or was in. Until the economic landscaped changed. Now, formerly carefree spenders are taking a sharper look at how much that cool really costs. And, oh, by the way, is it really so cool, while we're at it?"

    Kay's current message plays very well to Microsoft's current marketing message for Windows-based PCs...eerily well. In the last several days prior to the release of Kay's white paper, we've heard and read a lot about the Mac tax, whose symbology is apparently designed to convey the impression that Apple users are paying more for essentially the same equipment and performance. Kay himself has been part of the buildup, writing in BusinessWeek last month that Apple has been a victim of its own success in recent months, as evidenced by the rising number of malicious attacks against the Mac platform.

    One can't help but notice how well produced Microsoft's own presentation of this message is, including the IRS tax form mockup featured in Brandon LeBlanc's blog post yesterday.

    There's a certain political ring to Microsoft's metaphor of choice, an unavoidable resemblance to the NAB's campaign against the removal of performance royalties exemptions from terrestrial radio stations, a subject of heated deliberations in Congress for the last three years. The NAB's campaign this week was given the name "No Performance Tax," and paints the recording industry as a cartel conspiring against the entire music promotion business, to collect from broadcast radio what it collects from Internet radio.

    It's not a real tax, of course, but the way it's portrayed has made some believe that Congress is truly deliberating a real tax, a surcharge. Kay's language in his white paper isn't that sneaky, but the metaphor can be construed to present a picture that Apple is the principal recipient of a windfall surcharge -- in his hypothetical case, nearly $3,400 more being spent for a complete Apple-based home network, over comparable Windows-based products selling for under $2,700.

    But there are a few key facts that Kay left out. First of all, nothing about his hypothetical family's PC system of choice -- made up of Dell and HP PCs, plus accessories from notables such as Linksys, LiteOn, and Iomega -- is as hard-wired to Windows as the Apple-based system's parts are to Mac OS. To be fair, Kay's comparison is between an x86-based platform where Windows is pre-installed and one where Mac OS is installed -- and in the former case, a home user could choose Linux.

    If Linux entered this discussion, then Microsoft's typical argument against Linux could kick in. I've heard it before, even recently: Users will willingly pay something extra for reliability, for performance, for having real companies backing them up when problems arise, for having the better software. If there's a premium, then at least it's worth something. Now, if that argument sounds familiar, it's because it's the same argument Apple has used since the early 1990s to back up the "Mac premium"; and if Microsoft is avoiding echoing that argument, it's probably playing it smart.

    Secondly, the tax metaphor gives the impression that Apple is reaping a huge surplus for essentially the same equipment. It's not. Gross margins for Apple aren't terrible, but they haven't been great in recent months. At about 33% overall and flat, Apple does make more on its equipment than competitors such as Dell (18%), but it's not 74% -- a conclusion one could draw from Kay's numbers by doing the math the wrong way.

    Perhaps more importantly, though, it seems a little sad that after all these years of allowing others to make its case for it, Microsoft's first serious charge at Apple is based around something as plainly obvious to everyone who's ever purchased a Mac as its relative expense. By avoiding the qualitative side of the argument -- and I actually believe there is one -- Microsoft is leaving itself open to a counter-attack. And my fear is that Apple, clever as it may be, will create a professorial, number-crunching, metaphor-generating character to represent Microsoft's case...a character who, to some of us, at least, might look a little familiar.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/10/Another_round_of_AnyDVD_improvements_cracks_more_BD__discs'

    Another round of AnyDVD improvements cracks more BD+ discs

    Published: April 10, 2009, 7:03pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Download AnyDVD HD 6.5.4.0 from Fileforum now.

    In what's becoming a monthly affair for SlySoft, the makers of the DVD and Blu-ray disc backup system AnyDVD have released another update, this time with the capability to back up even more discs protected by the more sophisticated BD+ scheme.

    What was supposed to be the beauty of BD+, from publishers' perspective, was its versatility and adaptability in the face of new cracks. But as it turns out, much of the software being used to break protection is like AnyDVD HD -- used mostly for non-malicious purposes, by folks who want to protect the media they legitimately paid for. As a result, breaking the protection is becoming a viable commercial enterprise, making content publishers run faster to adapt broken encryption schemes.

    Early adopters are reporting today that AnyDVD and AnyDVD HD on systems running Avast anti-virus software trigger false positives, which can be ignored. There's also a small, though growing, number of BD+-encoded discs that still cannot be copied, including the North American (Region A) edition of the Academy Award-nominated film The Wrestler.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/10/Live_Labs_will_be_a_little_less_live__as_Lindsay_moves_to_RIM'

    Live Labs will be a little less live, as Lindsay moves to RIM

    Published: April 10, 2009, 6:02pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Restructuring is a process that a great many companies, both big and small, are going through these days. But Microsoft isn't accustomed to being one of those companies that shares its pain with its users -- case in point, the 2006 announcement of Windows Vista's delay, which was presented to the public as the product being "on track," in an unscheduled "road map update."

    The sad fact this morning is that Live Labs, the Microsoft project responsible for one of the most innovative promotions in all of software this year -- the synthesizing of hundreds of simultaneous photographs of Pres. Obama's inauguration -- is being downsized. This morning's announcement from Microsoft was an effort to say it's not painful and it doesn't mean too much and everything's fine, which in and of itself is an indicator that it's not.

    "A number of teams from within the lab will be joining product groups around the company," reads this morning's announcement. "For instance, the social streams team will be joining MSN. Some of our engineers will be helping out with the next generation of Windows Mobile. And others are off to Live Search and Microsoft Advertising. The rest of us will continue our work on building new web experiences, as we always have. But moving great people and projects into the product groups has always been part of our process, so today's news is entirely consistent with what we've always done."

    Maybe, if you count Research In Motion as a "product group." Don Lindsay, Live Labs' former high-profile director, is now listed on his own LinkedIn page as the Vice President for User Experience at RIM -- a huge win for the BlackBerry maker.

    In a January 2008 interview with Long Zheng of I Started Something, Lindsay described the work he was doing with Photosynth as creating the model for product development that could be accomplished incrementally and seamlessly, without the end user having to be too concerned with the stark and surprising nature of changes.

    "New technologies can be intimidating," Lindsay told Zheng, "so the challenge with something like Photosynth is figuring out how to best package and deliver it such that the technology effectively 'disappears' and users can simply dive in, be rewarded and not have to concern themselves with what is new or different. If the capabilities the technology enables are conspicuous, are valuable to the user and we don't consciously put roadblocks in their way, then we've been successful."

    It was a very non-Microsoft philosophy that Lindsay was building, and he was actually doing a pretty good job of it. He might not have made a very good fit at Google. But in joining RIM, Lindsay could be placing himself in the position of producing a trademark look-and-feel for a handset class that desperately needs it. Reports have inaccurately placed Lindsay at Apple during the time it was developing the iPhone; in fact, Lindsay's tenure at Apple only stretched to 2003. But during that time, he led the team that developed Aqua, the trademark appearance of Mac OS that remains the envy of Microsoft...which may continue for the foreseeable future.

    The Live Labs team was also responsible for Seadragon, the well-reviewed experimental system for picture browsing on mobile devices, most notably and surprisingly the Apple iPhone. Those Live Labs engineers joining the Windows Mobile team may not be producing any more iPhone apps. And Matthew Hurst, the fellow behind the intriguing Social Streams aggregator of content from social networks, tells the world this morning that he's excited to join MSN. He claims to be taking Social Streams with him, though MSN has never been known as a developmental utopia.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/10/EU_to_debate_whether_the_Internet_has_outmoded_public_broadcasting'

    EU to debate whether the Internet has outmoded public broadcasting

    Published: April 10, 2009, 12:51am CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In what could be a long, but fundamental, rethinking of the role governments play in subsidizing the public dissemination of information throughout Europe, the EU Parliament will begin considering a possible redrafting of regulations passed in 2001 regarding the role of public broadcasters -- which in Europe means the corporations subsidized by taxes or licenses rather than advertising. At issue is a new question brought about by the evolution of the Internet: As long as private entities are spending billions to make broadband and wireless information services available, why should public broadcasters get all the breaks and be subsidized to compete on the same level?

    "It must be noted that commercial broadcasters, of whom a number are subject to public service requirements, also play a significant role in achieving the objectives of the Amsterdam Protocol to the extent that they contribute to pluralism, enrich cultural and political debate and widen the choice of programmes," reads the latest draft of an EU commission report (PDF available here). That report calls for public comment on the role of public broadcasters' subsidies, during a period slated to begin in July.

    "Moreover," the report continues, "newspaper publishers and other print media are also important guarantors of an objectively informed public and of democracy. Given that these operators are now competing with broadcasters on the Internet, all these commercial media providers are concerned by the potential negative effects that State aid to public service broadcasters could have on the development of new business models."

    Up until the 1980s, Europe's public broadcasters (for example, the British Broadcasting Corp.) were widely perceived as the caretakers of culture for their native countries. Commercial competitors, while encouraged, were often viewed as infusing prurient, objectionable, and gratuitous elements into popular media, leading to the image (however false) of a national culture that's consistently at war with what amuses people.

    But the introduction of the Web led to a blurring of the distinctions between public and private media producers, such that the public itself appears to be neglecting to apply the usual distinction between the "cultural" side and the "popular" side. At this time last year, the European Commission opened public comment on precisely this same debate, essentially asking whether there remains a class of media provider that's entitled to public assistance.

    At that time, a public media advocacy group called The Open Society Initiative, in a submission that is likely to be repeated again this year, argued that any perception that it's the public broadcasters that have the advantage is a false one. As the OSI wrote, "All public service broadcasters today -- even those performing most successfully -- are caught in an unsustainable and vicious circle whereby on the one hand they need to justify their privileges by offering standard-setting output in mainstream strands, while at the same time also provide services that commercial rivals do not offer, notably in cultural, educational, children's, and minority programming."

    Many public broadcasters intend to provide service over not just the Web, but the continent's growing mobile digital TV service. Should they qualify for state aid to do so, even though they may be making the same profit as commercial competitors? A controversial 2003 European court decision referred to as "the Altmark case" ruled, mainly for diplomatic purposes (but also for appropriations bills), that when a state government helps a public broadcaster or other utility run its service, it's not officially state aid at all. In other words, think of it as the government fulfilling its civic duty, not running a media company.

    But since the EU is made up of many nations, and since all those nations do business with one another, the fact that public broadcasters are media companies means they must do business internationally. Case in point again: the BBC, which distributes the rights to one of the US' most popular TV shows, "Dancing with the Stars," and operates the commercial BBC America cable channel in the States. Those are both commercial deals; so whatever you choose to call the money the BBC receives, is it still entitled to it...especially since the commercial products in these cases are exports, and therefore not vital to its nation's cultural health?

    "As the Court of Justice has observed: 'When aid granted by the State or through State resources strengthens the position of an undertaking compared with other undertakings competing in intra-Community trade the latter must be regarded as affected by that aid,'" reads this week's draft document from the EU commission. "This is clearly the position as regards the acquisition and sale of programme rights, which often takes place at an international level. Advertising, too, in the case of public broadcasters who are allowed to sell advertising space, has a cross-border effect, especially for homogeneous linguistic areas across national boundaries. Moreover, the ownership structure of commercial broadcasters may extend to more than one Member State. Furthermore, services provided on the Internet normally have a global reach."

    So now the question becomes whether public broadcasters are entitled to...something when competing against private entities in international markets, as opposed to cultural affairs. If the tide of public opinion shifts in favor of market forces, member states could foresee a time when they spin off all or part of their interest in public broadcasters and Internet content providers, perhaps in acquisition deals with existing media giants. And then the whole question of reducing competition rears its ugly head again.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/09/New_hope_for_US_memory_maker_Spansion_after_big_Samsung_settlement'

    New hope for US memory maker Spansion after big Samsung settlement

    Published: April 9, 2009, 9:09pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Bringing a quiet end to a case in which one of America's brighter hopes in semiconductors had vowed to go down swinging, Spansion -- the producer of flash memory born from an AMD spinoff -- settled the case it brought last November against global NAND flash powerhouse Samsung. Spansion will receive a one-time payment of $70 million in cash, and the two companies have agreed to share each other's patent portfolios.

    The news is exactly what Spansion needs right now to survive, having filed for bankruptcy just last month. A few weeks ago, the company reported fiscal first quarter revenue of about $400 million, which isn't small change by any means. But that's a 15% annual drop, and the flash memory business has notoriously thin margins.

    One of the restructuring options on the table for Spansion -- at least as of a few weeks ago -- was the sale of some or even all of its manufacturing facilities, retaining only its IP portfolio and maintaining an "asset light" strategy. But if that's the type of division Spansion wanted to become in the first place, AMD might never have sold it off.

    In a report this afternoon, Jim Handy of Objective Analysis said he believed the settlement was good news for Samsung as well. The crown jewel of Spansion's portfolio is a charge-trapping technology that enables a single memory cell to hold three possible states instead of just two. Now, that one-time payment, Handy said, may end up reducing Samsung's allotment for royalty payments over time.

    "The settlement is probably...a really good deal for Samsung's semiconductor business, since they will only make a one-time payment for Spansion's key charge trapping technology instead of an ongoing royalty stream which would most likely have been a percentage of sales, payments that could rise to considerably more than this one-time payment," Handy wrote. "Meanwhile, Samsung's cell phone group, who has awarded Spansion with vendor awards for a number of years, can feel more comfortable about the future of a valued supplier."

    As for Spansion, Handy said last November he thought the company would be forced to explore a merger, perhaps with a company like Broadcom with the muscle to compete against Samsung. But this agreement -- which Handy now concedes was "nervy" though fruitful for Spansion -- may have delayed or averted such an outcome.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/09/Microsoft_used_software_activation_without_a_license__jury_finds'

    Microsoft used software activation without a license, jury finds

    Published: April 9, 2009, 6:56pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    In a unanimous and complete decision by a Rhode Island US District Court jury yesterday, Microsoft was found to have willfully infringed an inventor's 1996 patent for a continual software activation and licensing system -- effectively saying that Microsoft stole the technology for preventing users from stealing its technology. The inventor -- an Australian named Frederic B. Richardson, III, of Uniloc Private, Ltd. -- was awarded $388 million USD, or more than half a billion Australian dollars.

    The records on Richardson's suit, dating back to 2005, are too old for public online availability; otherwise we'd do our usual citation of the original suit. But the single patent Richardson was defending covers a system that enables software to run at all only if the licensing mechanism lets it do so. It's the software activation scheme that has become one of Windows' and Office's trademarks -- the very system that Microsoft first introduced to Betanews in 2001. At that time, the company emphasized the discovery it claimed to have made: a system that can detect when the underlying hardware has been changed from the original point of licensing, to prevent copies of that software from being run on multiple PCs.

    As Richardson's 1996 patent summary describes it, "In broad terms, the system according to the invention is designed and adapted to allow digital data or software to run in a use mode on a platform if and only if an appropriate licensing procedure has been followed. In particular forms, the system includes means for detecting when parts of the platform on which the digital data has been loaded has changed in part or in entirety as compared with the platform parameters when the software or digital data to be protected was for example last booted or run or validly registered."
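
    To make the claim concrete: the idea is that the software computes a fingerprint of the platform it's running on, and refuses to enter its "use mode" if that fingerprint no longer matches the one recorded at licensing time. The sketch below illustrates only the concept -- the fingerprint inputs and hashing scheme are our own hypothetical choices, not Uniloc's or Microsoft's actual method:

        # Illustrative platform-locked activation check: run only if the
        # machine still matches the fingerprint recorded at activation.
        import hashlib
        import platform
        import uuid

        def platform_fingerprint() -> str:
            # Combine a few machine identifiers; real schemes weigh many
            # hardware components and may tolerate partial changes.
            parts = [platform.node(), platform.machine(), hex(uuid.getnode())]
            return hashlib.sha256("|".join(parts).encode()).hexdigest()

        def may_run(activated_fingerprint: str) -> bool:
            # "Use mode" is allowed only if the platform matches the license.
            return platform_fingerprint() == activated_fingerprint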

    The jury verdict showed that Microsoft's defense was not that it had discovered the concept of software activation for itself -- as it had claimed in 2001 -- but that Richardson's patent was invalid due to prior art. Specifically, Microsoft claimed that another inventor came up with the basic distribution principle in a 1983 patent for a software distribution mechanism. The jury unanimously voted "No" on Microsoft's defense claim. Microsoft also claimed that the whole notion of software activation was obvious enough not to require an invention. Granted, Microsoft did not cite any patents it may have held -- if it had, it could have countersued on the basis of their validity. Again, the jury voted "No."

    In a statement first issued to Reuters this morning, Microsoft stated it will ask the court to overturn the verdict, but did not give any indication that it might appeal. A willful infringement victory, if upheld, does not necessarily mandate that Microsoft must now obtain a license to use the technology from its inventor; the $388 million may be seen as effectively covering any license fee.

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/09/An_accidental_alert_triggers_a_Live_Messenger_uproar'

    An accidental alert triggers a Live Messenger uproar

    Published: April 9, 2009, 4:56pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    If one of your friends or business contacts on Windows Live Messenger has a different handle now than he did a few days ago, the reason may be that he received a message from Microsoft telling him he needed to change it, on account of a "recent system enhancement."

    A blog post on Microsoft's Windows Live Messenger site yesterday explained that an unknown number of Messenger users may have received this alert in the center of their desktops. But the blog post apologized, saying the message was sent in error. "You will be able to continue to use your current e-mail address," the post read, "and there is no reason to make any changes."

    But users were unlikely to have read the blog post before clicking the link in the message, which sent them to this 2006 set of instructions. A user following those instructions would have been taken to a page where she's asked to replace her "sign-in ID." Now, for Microsoft's purposes, that's the e-mail address with which her ID is associated; but the instructions clearly said that failure to associate the Live Messenger account with a different e-mail address could result in not being able to use Messenger.

    "If you receive an important service announcement from the Windows Live Messenger Service Staff that says you must change your sign-in ID in order to continue signing into Messenger, then you're affected by this change. We recommend that you take this action immediately upon receiving the notification. The process to change your sign-in ID is simple. If you don't make this change in the near future, you'll receive an error message when signing in, and won't be able to use the Windows Live Messenger Service until you change your sign-in ID," the instructions read.

    Those instructions, sadly, applied to the initial deployment of Live Messenger 2005 SP1, back in August 2006. That service pack was supposed to have been a security enhancement, but ever since its deployment, some users -- certainly not all, but still a good number -- have reported periodic service disruptions.

    Users who fill out the complete sign-in ID change form may end up changing their profiles as well -- the link to do so is included -- even though profile changes were unnecessary, even three years ago.

    So yesterday's rerun of the SP1 deployment message from three years ago ended up prompting users who have had this problem ever since that message premiered to voice their concerns to Microsoft in comments on yesterday's blog post.

    "I just spent an hour trying to figure this problem out but can't with your endless links to nothing [but] support pages, and [I] finally come across this!" wrote one Messenger user yesterday. "Thanks! For wasting everyone's time!"

    Some users reported not being able to log on in recent days, and suspected this issue may be related. Many others, upon receiving the message, suspected it was some kind of phishing scheme or hoax. And more than one made this suggestion: Why couldn't Microsoft use the same mechanism that sent the original false alert to send another one that says, please ignore?

    "I got one of these messages also, and if it weren't out of luck, I probably wouldn't have found this post," another user wrote. "Microsoft/Windows/whoever responsible, needs to make sure that all users know that this is simply an error. Sure, nothing happens when you do click, but people deserve an official explanation."

    Another user gave up after her Messenger started sending sporadic porn links to all her Messenger contacts, and assumed that this message was simply part of the malicious Trojan responsible. "Most of my contacts have blocked me," she wrote, before adding, "Thanks MSN for the wonderful experience."

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/08/FCC_s_McDowell__Careful_that__national_broadband__isn_t_just_for_cable'

    FCC's McDowell: Careful that 'national broadband' isn't just for cable

    Published: April 8, 2009, 10:52pm CEST by Scott M. Fulton, III

    By Scott M. Fulton, III, Betanews

    Normally, Congressional legislation regarding broadband Internet service takes the time to define its terms. Right now, for purposes of US law in a rapidly changing technology climate, the law itself defines the term rather broadly. For example, 7 USC 31 section 950bb defines the phrase this way: "The term 'broadband service' means any technology identified by the Secretary as having the capacity to transmit data to enable a subscriber to the service to originate and receive high-quality voice, data, graphics, and video."

    The keyword there is any, letting broadband effectively mean anything that serves the Internet at high speed and bandwidth. But appropriations bills are laws in a very different sense -- they don't amend US Code. They simply say, here's some money, and here's what it will be spent on...and maybe they give definitions, and maybe they don't.

    In the case of H. R. 1, the massive federal stimulus bill signed into law last February 13, an appropriation was made for the FCC to build some kind of national broadband policy. As the final form of the bill reads: "That of the funds provided under this heading, amounts deemed necessary and appropriate by the Secretary of Commerce, in consultation with the Federal Communications Commission (FCC), may be transferred to the FCC for the purposes of developing a national broadband plan or for carrying out any other FCC responsibilities pursuant to division B of this Act, and only if the Committees on Appropriations of the House and the Senate are notified not less than 15 days in advance of the transfer of such funds."

    There's nothing in the bill that says what "broadband" means; and for FCC Commissioner Robert McDowell -- one of only three active commissioners during a long transition period -- that sent up a red flag. Though yesterday began a long public response period, during which the FCC is seeking advice and input from the general public, Comm. McDowell cautioned that the new team -- perhaps five members strong before too much longer -- should refrain from presuming, as lawmakers have done in the past, that broadband is provided by cable TV providers.

    "It is critical that our plan be competitively and technologically neutral," wrote McDowell yesterday (PDF available here). "Given the incredibly diverse nature of our country -- both in terms of geography and demographics -- our plan must not favor one particular technology or type of provider over another, even inadvertently. Broadband deployment throughout America simply is not a one-size-fits-all proposition. Wireline, wireless, and satellite technologies are meaningful alternatives, each worthy of our attention. For instance, to deny the people of Alaska the benefits of broadband connectivity via wireless and satellite would be tantamount to isolating the tens of thousands of Americans who live on Native lands and in subsistence villages. Thus, as we proceed, we must be mindful of the law of unintended consequences before making any new rules."

    McDowell went on to suggest that new rules be left open-ended, in order that new technologies such as white space transmission can be utilized by smaller companies that can get more done with less investment. The alternative, he warned, would be to repeat the usual pattern of the government (by way of the FCC) deciding who wins and who loses, and in so doing crafting laws whose rules are bound so much to the technology of the moment that they lead to a kind of red tape he called "whimsical regulatory arbitrage."

    McDowell's warning comes in response to the FCC's call yesterday for public comments regarding the nature of a national broadband plan, to be delivered to Pres. Obama on February 17, 2010. The stated goal of the plan is to advise the President on "the most effective and efficient ways to ensure broadband access for all Americans." But what has yet to be determined is whether the plan will be limited to funding the builders of new network infrastructure, or whether it will go further and potentially grant municipal-bypassing national licenses to major players. Three years ago, legislation that Congress failed to pass would have created a national licensing scheme, using language that appeared to favor CATV providers such as Comcast and Cox over telecommunications carriers such as Verizon and AT&T.

    Acting FCC Chairman Michael Copps' statement appears to give a nod to McDowell on this matter, saying, "Our Notice of Inquiry seeks to be open, inclusive, outreaching and data-hungry. It seeks input from stakeholders both traditional and nontraditional -- those who daily ply the halls of our hallowed Portals, those that would like to have more input here if we really enable them to have it, and those who may never have heard of the Federal Communications Commission. It will go outside Washington, DC to rural communities, the inner city and tribal lands. It will go where the facts and the best analysis we can find take it. It will look at broadband supply and broadband demand. It will look at broadband quality and affordable prices. It will endeavor to better understand, and hopefully build upon, the cross-cutting nature of what broadband encompasses, beginning with an appreciation that it brings opportunities to just about every sphere of our national life.

    "And it can also consider, in addition to the many opportunity-generating characteristics of broadband, how to deal with any problems, threats or vulnerabilities that seem almost inevitably to accompany new technologies," Chairman Copps continued. "Ensuring broadband openness, avoiding invasions of people's privacy, and ensuring cybersecurity are three such challenges that come immediately to mind. We have never in history seen so dynamic and potentially liberating a technology as this. But history tells us that no major technology transformation is ever a total, unmixed, problem-less blessing."

    Copyright Betanews, Inc. 2009
  • Link for 'BetaNews.Com/2009/04/08/German_gov_t_fines_Microsoft_for__influencing__Office_resale_prices'

    German gov't fines Microsoft for 'influencing' Office resale prices

    Published: April 8, 2009, 6:00pm CEST by Scott M. Fulton, III

    This morning, Germany's Bundeskartellamt -- the country's federal anti-cartel authority -- issued a €9 million fine against Microsoft for what it describes as illegally and anti-competitively influencing the retail price of Office Home & Student Edition 2007.

  • Link for 'BetaNews.Com/2009/04/08/The_news_doesn_t_want_to_be_free'

    The news doesn't want to be free

    Published: April 8, 2009, 5:00pm CEST by Scott M. Fulton, III

    In this era of apparently free information over the Internet, is there still a value to journalism that one can pin a price tag on? The AP says yes.

  • Link for 'BetaNews.Com/2009/04/07/New_Obama_DOJ_claims_sovereign_immunity_in_wiretap_case'

    New Obama DOJ claims sovereign immunity in wiretap case

    Published: April 7, 2009, 11:21pm CEST by Scott M. Fulton, III

    An ancient, unwritten principle that the Constitution itself failed to waive is the basis for a Justice Dept. petition to dismiss a FISA-related civil suit.

  • Link for 'BetaNews.Com/2009/04/07/The_new_Nehalem_based_Apple_Xserves_promise_a_price_advantage'

    The new Nehalem-based Apple Xserves promise a price advantage

    Published: April 7, 2009, 6:52pm CEST by Scott M. Fulton, III

    One doesn't always get the opportunity to tout Apple as a discount category leader, so one takes all the opportunities one can get...and this is one.

  • Link for 'BetaNews.Com/2009/04/07/Spybot_Search___Destroy_competitors_are_trying_to_force_its_removal'

    Spybot Search & Destroy competitors are trying to force its removal

    Published: April 7, 2009, 6:15pm CEST by Scott M. Fulton, III, Nate Mook and Angela Gunn

    It's hard enough to make a name for yourself as an independent anti-malware company, without competitors demanding your software be uninstalled.

  • Link for 'BetaNews.Com/2009/04/07/Google_extends_search_localization_to_the_desktop'

    Google extends search localization to the desktop

    Published: April 7, 2009, 5:07pm CEST by Scott M. Fulton, III

    Users of Google's mobile services on handsets are familiar with how its search service can assume they're looking for something in their own general vicinity, even using GPS location. That level of detail hasn't always been available through desktop searches, though in recent months, I've noticed Google has been testing the concept off and on. I'd assumed the company was judging my approximate geography using my IP address.
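
    The guess above -- that Google infers a desktop user's location from the IP address -- reflects a standard general technique: match the address against a table of network prefixes whose registrant geography is known. What follows is a minimal, hypothetical Python sketch of that idea only; the prefix table uses reserved documentation ranges and is illustrative, with no relation to Google's actual data or methods.

        import ipaddress

        # Hypothetical prefix-to-location table. Real geolocation services
        # map IP blocks to geography using registry and ISP data; these
        # entries use reserved documentation ranges and are made up.
        PREFIX_LOCATIONS = {
            ipaddress.ip_network("203.0.113.0/24"): "Indianapolis, IN",
            ipaddress.ip_network("198.51.100.0/24"): "Seattle, WA",
        }

        def guess_city(ip_string):
            """Return a coarse location guess for an IP address, or None."""
            addr = ipaddress.ip_address(ip_string)
            for prefix, city in PREFIX_LOCATIONS.items():
                if addr in prefix:
                    return city
            return None

        print(guess_city("203.0.113.42"))  # -> Indianapolis, IN

    A lookup of this kind only places a user as precisely as the ISP's address allocations allow, which is why IP-based localization is coarser than the GPS-assisted results mobile users see.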

  • Link for 'BetaNews.Com/2009/04/02/Senate_will_debate_one_more_Obama__czar___this_time_for_cybersecurity'

    Senate will debate one more Obama 'czar,' this time for cybersecurity

    Published: April 2, 2009, 12:20am CEST by Scott M. Fulton, III and Angela Gunn

    A bill that moves responsibility for the nation's online security outside the auspices of the Homeland Security Dept. was introduced on the Senate floor today.

  • Link for 'BetaNews.Com/2009/03/30/Virginia_anti_spam_law_now_dead_after_Supreme_Court_rejects_appeal'

    Virginia anti-spam law now dead after Supreme Court rejects appeal

    Published: March 30, 2009, 11:12pm CEST by Scott M. Fulton, III

    After losing a unanimous decision in the state's Supreme Court last September, the State of Virginia appealed to the US Supreme Court to breathe new life into an anti-spam law intended to put serial spammers behind bars. A constitutional rights appeal by spammer Jeremy Jaynes, convicted in 2005 and sentenced to nine years' imprisonment, had met with overwhelming victory there, but state lawmakers saw the nation's highest court as their last chance.

  • Link for 'BetaNews.Com/2009/03/30/Now_Western_Digital_enters_the_SSD_market_with_SiliconSystems_buyout'

    Now Western Digital enters the SSD market with SiliconSystems buyout

    Published: March 30, 2009, 10:18pm CEST by Scott M. Fulton, III

    WD once said it would enter the solid-state disk drive industry the moment it actually existed. Knock, knock.

  • Link for 'BetaNews.Com/2009/03/30/New_Nvidia_GPUs_geared_to_work_with_multiple_physical__virtual_systems'

    New Nvidia GPUs geared to work with multiple physical, virtual systems

    Published: March 30, 2009, 8:59pm CEST by Scott M. Fulton, III

    The Quadro FX professional graphics card line includes SLI Multi-OS support, which is supported by Parallels.

  • Link for 'BetaNews.Com/2009/03/30/IBM_deal_with_Sun_could_leave_Fujitsu_servers_up_in_the_air'

    IBM deal with Sun could leave Fujitsu servers up in the air

    Published: March 30, 2009, 6:35pm CEST by Scott M. Fulton, III

    With the industry at large collectively having verified that IBM and Sun Microsystems are in talks toward a possible merger deal, the question of the fate of Sun's long-standing SPARC system architecture becomes a topic of intense conversation. Today, a Fujitsu America executive probably did the opposite of what he'd intended, first by telling Reuters he wouldn't comment, and then commenting in a way he might not have planned on.

  • Link for 'BetaNews.Com/2009/03/30/Behold_the_Open_Cloud_Manifesto__Insert_your_ideas_here'

    Behold the Open Cloud Manifesto: Insert your ideas here

    Published: March 30, 2009, 5:27pm CEST by Scott M. Fulton, III

    "This document is intended to initiate a conversation that will bring together the emerging cloud computing community," reads the new Manifesto's preamble.

  • Link for 'BetaNews.Com/2009/03/24/Final_SuSE_Linux_11_includes_Moonlight_1.0_for_Silverlight'

    Final SuSE Linux 11 includes Moonlight 1.0 for Silverlight

    Published: March 24, 2009, 11:09pm CET by Scott M. Fulton, III

    The first link in a bridge between Web site architectures is now officially part of a Novell Linux distro.

  • Link for 'BetaNews.Com/2009/03/24/Controversial_copyright_violator_provision_struck_down_in_New_Zealand'

    Controversial copyright violator provision struck down in New Zealand

    Published: March 24, 2009, 9:18pm CET by Scott M. Fulton, III

    If someone appears to be repeatedly sharing an unauthorized file, should his ISP do something to stop it? A law requiring it to do so has been stalled.

  • Link for 'BetaNews.Com/2009/03/24/Mozilla_experiments_more_with__New_Tab__in_Firefox_3.1'

    Mozilla experiments more with 'New Tab' in Firefox 3.1

    Published: March 24, 2009, 4:16pm CET by Scott M. Fulton, III

    Google Chrome and Microsoft Internet Explorer have added functionality to their fresh browser tabs. Should Firefox take a different tack?

  • Link for 'BetaNews.Com/2009/03/21/Can_Mozilla_escape_a_premature_endgame_for_Firefox_'

    Can Mozilla escape a premature endgame for Firefox?

    Published: March 21, 2009, 12:07am CET by Scott M. Fulton, III

    It was born from the remains of Netscape Navigator, but suddenly, there's a plausible scenario pointing to Firefox becoming the victim of its own success.

  • Link for 'BetaNews.Com/2009/03/20/Slow__but_steady_usage_share_growth_in_IE8_s_first_day'

    Slow, but steady usage share growth in IE8's first day

    Published: March 20, 2009, 9:52pm CET by Scott M. Fulton, III

    The early numbers from Web analytics firm NetApplications indicate a slower than expected, but steady uptick in usage share for Microsoft Internet Explorer 8, a product which was introduced at noon yesterday on the East Coast. It's not being pushed as an update to the Windows operating system, so trading up for now is still a voluntary affair for users.

  • Link for 'BetaNews.Com/2009/03/20/Skydiving_through_the_cloud__Windows_Azure_gambles_with__Full_Trust_'

    Skydiving through the cloud: Windows Azure gambles with 'Full Trust'

    Published: March 20, 2009, 7:54pm CET by Scott M. Fulton, III

    Up to now, Azure has been a cloud-based staging service for the .NET platform. To go beyond that stage, that cloud has to be able to run native code for its customers.

  • Link for 'BetaNews.Com/2009/03/20/Much_ado_about_undo__A_new_Gmail_feature_literally_lasts_five_seconds'

    Much ado about undo: A new Gmail feature literally lasts five seconds

    Published: March 20, 2009, 3:45pm CET by Scott M. Fulton, III

    In perhaps another sterling demonstration of the effectiveness of Google's product announcements by way of its blog posts, the world awakened this morning to an experimental capability in Google's Gmail that makes you wonder why no one thought of it before: An independent developer with the handle Yuzo F is distributing a Gmail add-on that gives users five seconds after clicking the Send button to click an Undo link that stops distribution from going forward.
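
    The five-second window described above amounts to a delayed, cancellable send: the message sits briefly in a queue, and clicking Undo cancels delivery before a timer fires. Here is a minimal Python sketch of that general technique under those assumptions; the class and names are hypothetical, and this is in no way the add-on's or Gmail's actual code.

        import threading

        class UndoableSender:
            """Sketch of an 'undo send' window: delivery is deferred for a
            few seconds, during which it can still be cancelled."""

            def __init__(self, delay_seconds=5.0):
                self.delay = delay_seconds
                self._timer = None

            def send(self, message, transport):
                # Defer handing the message to `transport` (any callable)
                # until the undo window has elapsed.
                self._timer = threading.Timer(self.delay, transport, args=(message,))
                self._timer.start()

            def undo(self):
                # Cancel delivery if the timer hasn't fired yet; returns
                # True when the send was stopped in time.
                if self._timer is not None and self._timer.is_alive():
                    self._timer.cancel()
                    return True
                return False

        sender = UndoableSender()
        sender.send("Hello, world", transport=print)
        print("Undone in time:", sender.undo())  # message never goes out

    The design trade-off is plain enough: every outgoing message is held for the length of the window, so a longer undo period means slower delivery for everyone.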

  • Link for 'BetaNews.Com/2009/03/19/Performance_test__IE8_easily_doubles_IE7_speed'

    Performance test: IE8 easily doubles IE7 speed

    Published: March 19, 2009, 9:31pm CET by Scott M. Fulton, III

    Internet Explorer 8's speed will be better, its product manager said today, for folks who live in the real world. Tell that to the planet with the Firefox and Safari users.

  • Link for 'BetaNews.Com/2009/03/17/The_heat_is_on__Latest_Google_Chrome_closes_the_gap_with_Safari_4__Firefox_3.1'

    The heat is on: Latest Google Chrome closes the gap with Safari 4, Firefox 3.1

    Published: March 17, 2009, 10:55pm CET by Scott M. Fulton, III

    Not since the 1990s have the speed and performance of a computing tool used by millions mattered this much. This afternoon, Google is answering the call.

  • Link for 'BetaNews.Com/2009/03/17/The_end_of_the_PC_pothole__for_everyone_but_Apple'

    The end of the PC pothole, for everyone but Apple

    Published: March 17, 2009, 7:27pm CET by Scott M. Fulton, III

    Blue skies may be shining for America's PC makers, but the high-priced Macs aren't joining the party, says NPD's Stephen Baker.

  • Link for 'BetaNews.Com/2009/03/17/Microsoft_in_an_IP_deal_with_manufacturer_that_brought_DMCA_case'

    Microsoft in an IP deal with manufacturer that brought DMCA case

    Published: March 17, 2009, 5:22pm CET by Scott M. Fulton, III

    What Lexmark so vehemently sought to keep secret five years ago, it's happy to share now, at least with Microsoft.

  • Link for 'BetaNews.Com/2009/03/17/A_phishing_scheme_may_have_exposed_700_Comcast_customers'

    A phishing scheme may have exposed 700 Comcast customers

    Published: March 17, 2009, 4:29pm CET by Scott M. Fulton, III

    A document posted to the online sharing service Scribd appeared to show thousands of comcast.net accounts, along with their passwords. It was probably put there as a display of somebody's phishing prowess, though it would appear it took two months or more before anyone finally noticed.

  • Link for 'BetaNews.Com/2009/03/17/AMD_to_Intel__We_ll_come_clean_if_you_will'

    AMD to Intel: We'll come clean if you will

    Published: March 17, 2009, 4:05pm CET by Scott M. Fulton, III

    This morning, AMD raised the bid in a high-stakes game of poker with Intel, and it's looking more and more like both sides are playing according to big plans.