Friday, June 15, 2012

The Dark Eye Chains of Satinav SKIDROW


SKIDROW
the leading force

proudly presents
The Dark Eye: Chains of Satinav (c) Daedalic Entertainment

15-06-2012....Release Date <-> Protection.................SecuRom
Adventure........Game Type <-> Disk(s).....................1 DVD9

RELEASE NOTES

For centuries the kingdom of Andergast has been at odds with
neighboring Nostria, but now first steps are being undertaken toward a
lasting peace. But a plague of crows troubles the king, for the birds
are acting with unusual aggressiveness, even attacking humans. As the
belligerent creatures infiltrate even the castle itself, the king seeks
a skilled bird catcher - an opportunity for young Geron to prove that
the reputation for ill luck that has followed him since childhood is
undeserved.

However, the task will prove much more difficult than he expects,
leading him on an adventure that will take him to the borders of the
charted lands of Aventuria and beyond.

INSTALL NOTES

1. Unpack the release
2. Mount or burn image
3. Install
4. Copy everything from the SKIDROW folder into the game installation
5. Play the game
6. Support the companies whose software you actually enjoy!

GREETINGS

To all friends of the family and honorable rival groups!

ascii art by the
godlike & terrific duo
malodix + irokos
titan artdivision

Namco on the challenges of porting Dark Souls: “we don’t really have that strong PC experience”


 
There were some worrying noises from the Dark Souls camp during E3 last week. Dark Souls producer Daisuke Uchiyama told Eurogamer that From Software “haven’t been able to step up into doing any specific optimisation for PC,” admitting that the framerate problems present in the console versions will likely persist. “It’s more strictly a port from the console version,” he said.
Later in the show, Graham asked Nobu Taguchi of Namco Bandai America about the challenges of bringing Dark Souls to PC. Taguchi painted a picture of a studio surprised by the sudden demand for a PC version, struggling to meet the expectations of a new audience. He admits that “from an experience background From Software and Namco Bandai ourselves, we don’t really have that strong PC experience.”

The project started when a petition showing support for a PC version of Dark Souls gained tens of thousands of signatures within a month. That spurred Namco Bandai into action. “At that point that’s when we brought it over to From Software to discuss the concept of ‘are you able to create this PC version of the game that everybody is asking for?’” Taguchi explained. “From Software being very community orientated said that ‘We’ll try our best,’ but one of the concepts they were fearing was that just bringing out a straight port wouldn’t suffice at all.”
From Software decided to expand the game to “alleviate” the optimisation drawbacks, in Taguchi’s words, “to create a brand new location and a strong extension which really expands what the game was originally about.” That extension includes the extra bosses and a new PvP mode being slotted into the Prepare to Die edition.
Will it be worth putting up with poor performance to access the new areas? Taguchi suggests that the severity of the port problems will vary depending on the power of the player’s machine. “While the game hasn’t been tweaked itself, because it’s very difficult to tweak, but for people who play on the PC, which is arguably a lot more stronger format to work off of, it does improve the framerate issues,” he said.
“I think it’s really inherent on the person’s setup in terms of what kind of power the game can use. So it’s a little bit more difficult to determine, it really kind of shifts along with the processor that you’re selling.”
“It’s definitely going to be better than the console version,” he added later. “It’s just that in terms of what PC gamers are maybe looking at in terms of what they usually play, it may not match up.”

Dallas 2012


You're not dreaming: The cast of TNT's "Dallas" 2012 reboot is naked. Together.
In an homage to the famous shower scene during the original series' eighth season -- in which Bobby Ewing (Patrick Duffy) lathered up and greeted Pam, revealing that all of Season 8 was a dream -- TNT had its stars, both new and old, strip down to towels to promote the show's revival.
TNT's "Dallas" picks up years after the original series ended. Original series stars Duffy, Larry Hagman and Linda Gray are joined by a new crop of young Hollywood actors including Jordana Brewster, Jesse Metcalfe and Josh Henderson. Metcalfe plays Christopher Ewing, the adopted son of Bobby and Pam. Henderson plays John Ross Ewing III, the son of J.R. (Hagman) and Sue Ellen (Gray) Ewing. Expect the younger Ewing boys to clash just as much as J.R. and Bobby did, and not just when it comes to oil.
Check out the full version of the photo below.

Cast

Series cast summary:
Larry Hagman ... J.R. Ewing (4 episodes, 2012)
Patrick Duffy ... Bobby Ewing (4 episodes, 2012)
Josh Henderson ... John Ross Ewing (4 episodes, 2012)
Jesse Metcalfe ... Christopher Ewing (4 episodes, 2012)
Jordana Brewster ... Elena Ramos (4 episodes, 2012)
Steven Jeffers ... Head Doctor (4 episodes, 2012)
Jennifer Besser ... Bar Patron (3 episodes, 2012)
Kimberly Lynn Campbell ... Prison Nurse (2 episodes, 2012)
A.C. Hensley ... Chef (2 episodes, 2012)
Kevin Page ... Bum (2 episodes, 2012)
Dianne Sullivan ... Detective Johanna Johnson (2 episodes, 2012)

Dallas.2012.S01E01.720p.HDTV.X264-DIMENSION  
Dallas.2012.S01E02.720p.HDTV.X264-DIMENSION 

Pilot: Changing of the Guard
Two decades since viewers last saw them, the Ewings are back. Television's first family of drama, sabotage, secrets and betrayal are gathering at Southfork Ranch for the upcoming wedding of Bobby's adopted son, Christopher, to Rebecca Sutter. Although the occasion is joyous, an old family rivalry crosses generations after secret oil drilling on Southfork results in a major gusher. Everyone has his or her own agenda when the fight over oil and land threatens to tear the Ewings apart once again. 
Hedging Your Bets
The plot to take control of Southfork gets complicated when two-timing affairs and blackmail arise, and J.R. starts to ask one too many questions. Meanwhile, Christopher and Elena bury their feelings in order to work together on a business deal. But Christopher's new bride may be hiding her own secrets.
 

The future of AMD’s Fusion APUs: Kaveri will fully share memory between CPU and GPU


AMD is hosting its Fusion Developer Summit this week, and the overarching theme is heterogeneous computing and the convergence of the CPU and GPU. During the initial keynote yesterday, AMD’s Senior Vice President and General Manager of Global Business Units, Dr. Lisa Su, stepped on stage to talk about the company’s future with HSA (Heterogeneous System Architecture).
One of the slides she presented showed off the company’s Fusion APU roadmap which included the Trinity APU’s successor — known as Kaveri. Kaveri will be able to deliver up to 1TFLOPS of (likely single precision) compute performance, thanks to its Graphics Core Next (GCN) GPU and a Steamroller-based CPU. The really interesting reveal, though, is that Kaveri will feature fully shared memory between the GPU and CPU.


AMD has been moving in the direction of a unified CPU+GPU chip for a long time — starting with the Llano APU — and Kaveri is the next step in achieving that goal of true convergence. AMD announced at the keynote that “we are betting the company on APUs,” and spent considerable time talking up the benefits of the heterogeneous processor. Trinity, the company’s latest APU available to consumers, beefs up the GPU and CPU interconnects with the Radeon Memory Bus and FCL connections. These allow the GPU to access system memory and the CPU to access the GPU frame buffer through a 256-bit and a 128-bit wide bus (per channel, each direction) respectively, so the graphics core and the x86 processor modules can reach the same memory areas and communicate with each other.
Kaveri will take that idea even further with shared memory and a unified address space. The company is not yet talking about how it will specifically achieve this with hardware, but a shared on-die cache is not out of the question — a layer that has been noticeably absent from AMD’s APUs. Phil Rogers, AMD Corporate Fellow, did state that the CPU and GPU would be able to share data between them from a single unified address space. This will prevent the relatively time-intensive need to copy data from CPU-addressable memory to GPU-addressable memory space — and will vastly improve performance as a result.
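To picture the difference, here is a minimal Python/NumPy sketch. It is not AMD's API, and run_kernel() is only a stand-in for GPU work: the pre-HSA path has to stage data into a separate device buffer and copy the result back, while the unified-memory path lets the "kernel" touch the very buffer the CPU allocated.

    import numpy as np

    def run_kernel(buf):
        # Stand-in for a GPU kernel: scale every element in place.
        buf *= 2.0

    def process_discrete(host_data):
        # Pre-HSA / discrete-GPU model: stage the data into a separate
        # device-addressable buffer, run the kernel, copy the result back.
        device_copy = host_data.copy()   # host -> device transfer
        run_kernel(device_copy)          # GPU work
        host_data[:] = device_copy       # device -> host transfer
        return host_data

    def process_unified(shared_data):
        # Kaveri-style model: one address space, so the kernel works on the
        # same buffer the CPU allocated and no copies are needed.
        run_kernel(shared_data)
        return shared_data

    data = np.arange(1000000, dtype=np.float32)
    process_unified(data)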
AMD gave two examples of programs and situations where the heterogeneous architecture can improve performance — and how shared memory can push performance even further. The first example involved face detection algorithms. The algorithm runs in multiple stages: the image is scaled down at each stage while the search square stays the same size. In each stage, the algorithm looks for facial features (eyes, chin, ears, nose, etc.). If it does not find any, it discards that image and continues searching further scaled-down copies.
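As a rough illustration of that control flow (not AMD's actual detector), here is a small Python sketch: the search square stays fixed while the image is repeatedly scaled down, and has_facial_features() is a stub where a real detector would evaluate trained classifiers. The window size and halving step are assumptions.

    import numpy as np

    WINDOW = 24        # fixed search square, in pixels (assumed size)

    def has_facial_features(patch):
        # Stub for the per-window feature test (eyes, chin, ears, nose...);
        # a real detector evaluates trained classifiers here.
        return patch.mean() > 0.95

    def detect_faces(image, stages=6):
        detections = []
        current = image
        for stage in range(stages):
            h, w = current.shape
            if min(h, w) < WINDOW:
                break
            # Slide the fixed-size window across the current (scaled) image.
            for y in range(0, h - WINDOW + 1, WINDOW // 2):
                for x in range(0, w - WINDOW + 1, WINDOW // 2):
                    if has_facial_features(current[y:y + WINDOW, x:x + WINDOW]):
                        detections.append((stage, x, y))
            current = current[::2, ::2]   # assumed: halve the image each stage
        return detections

    faces = detect_faces(np.random.rand(480, 640))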
CPU vs GPU on the face detection algorithm; smaller numbers are better (shorter processing times).
The first stage of the workload is very parallel, so the GPU is well-suited to the task. In the first few stages the GPU performs well, but as the stages advance (and there are more and more dead ends), the performance of the GPU falls until it is eventually much slower than the CPU at the task. It was at this point that Phil Rogers talked up the company’s heterogeneous architecture and the benefits of a “unified, shared, coherent memory.” By allowing the individual parts to play to their strengths, the company estimates 2.5 times the performance and up to a 40% reduction in power usage versus running the algorithm on either the CPU or GPU alone. AMD achieved its best numbers by using the GPU for the first three stages and the CPU for the remaining stages (where it was more efficient). The split only pays off because the data never has to be copied between CPU and GPU memory at the hand-off; with that copy, the overhead would have erased HSA’s benefit.
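A hedged sketch of that scheduling decision, in the same spirit (the stage count and the passes_stage() stub are assumptions, not AMD's code): the first three stages run on the GPU, everything after that on the CPU, and because the memory is shared the hand-off needs no copy.

    import random

    GPU_STAGES = 3       # per AMD: the GPU handled the first three stages
    TOTAL_STAGES = 22    # assumed stage count, for illustration only

    def passes_stage(candidate, stage):
        # Stub classifier stage; a real cascade evaluates trained features.
        return random.random() > 0.4

    def run_stage_gpu(candidates, stage):
        # Stand-in for a massively parallel GPU pass over many candidates.
        return [c for c in candidates if passes_stage(c, stage)]

    def run_stage_cpu(candidates, stage):
        # Stand-in for the CPU, which copes better once few candidates survive.
        return [c for c in candidates if passes_stage(c, stage)]

    def run_cascade(candidates):
        for stage in range(TOTAL_STAGES):
            runner = run_stage_gpu if stage < GPU_STAGES else run_stage_cpu
            # With shared, coherent memory the GPU-to-CPU hand-off at stage 3
            # needs no buffer copy: both sides read the same candidate list.
            candidates = runner(candidates, stage)
            if not candidates:
                break
        return candidates

    survivors = run_cascade(list(range(100000)))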


The company’s second HSA demo drilled down into the time-intensive data copying issue even more. To show how shared memory cuts down on execution time and sidesteps the copy problem, the company presented a server application called Memcached (mem-cache-D) as an example. Memcached keeps a table of objects in system memory (ECC DDR3 in this case) and serves up components of web pages through store() and get() calls, without needing to pull the data from (much slower) disk storage.
When the get() function is ported to the GPU, the application’s performance improves greatly thanks to the GPU’s proficiency at parallel work. However, the program then hits a bottleneck: the data and instructions must first be copied from the CPU’s memory to the GPU for processing.
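For readers unfamiliar with it, here is a toy Python model of the idea. Real Memcached is a networked daemon with its own protocol; the store()/get() names simply follow the article, and get_many() is a hypothetical helper standing in for the batched lookup the demo pushed onto the GPU.

    class ToyCache:
        # Toy in-memory key/value table standing in for Memcached.

        def __init__(self):
            self._table = {}

        def store(self, key, value):
            self._table[key] = value

        def get(self, key):
            return self._table.get(key)

        def get_many(self, keys):
            # Batched lookups are the highly parallel part the demo offloads
            # to the GPU. Without shared memory, the keys (and the table)
            # would first have to be copied into GPU-addressable memory;
            # with HSA, both processors read the same copy in system RAM.
            return [self._table.get(k) for k in keys]

    cache = ToyCache()
    cache.store("page:/index.html", "<html>...</html>")
    print(cache.get_many(["page:/index.html", "page:/missing"]))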
AMD demos HSA accelerating MEMCACHED at AFDS 2012
Interestingly, the discrete GPU is the fastest at processing the data, but in the end is the slowest because it spends the majority of its execution time moving the data to and from the GPU and CPU memory areas. While the individual hardware is available to accelerate workloads in programs that use both CPU and GPU for processing, a great deal of execution time is spent moving data from the memory the CPU uses to the GPU memory (especially for discrete GPUs).
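A back-of-the-envelope model makes that trade-off concrete. The numbers below are invented for illustration and are not AMD's measurements: a discrete card that finishes the compute in 5 ms still loses badly once half a gigabyte has to cross the bus in both directions, while a slower zero-copy GPU wins overall.

    def total_time_ms(data_mb, compute_ms, transfer_gbps=None):
        # Toy model: total = transfer out + compute + transfer back.
        # transfer_gbps=None models a shared, zero-copy address space.
        if transfer_gbps is None:
            return compute_ms
        transfer_ms = 2 * (data_mb / 1000.0) / transfer_gbps * 1000.0
        return transfer_ms + compute_ms

    # Illustrative numbers only:
    print(total_time_ms(512, compute_ms=5.0, transfer_gbps=6.0))  # discrete GPU over PCIe: ~175.7 ms
    print(total_time_ms(512, compute_ms=9.0))                     # slower shared-memory GPU: 9.0 ms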
Trinity improves upon this by having the GPU on the same die as the CPU and providing a fast bus with direct access to system memory (the same system memory the CPU uses, though not necessarily the same address spaces). Kaveri will further improve upon this by giving both types of processors fast access to the same (single) set of data in memory. Cutting out the most time-intensive task will let programs like Memcached reach their performance potential and run as fast as the hardware will allow. In that way, unified and shared memory is a good thing, and will open up avenues to performance gains beyond what Moore’s law and additional CPU cores alone can deliver. Allowing the GPU and CPU to simultaneously work from the same data set opens a lot of interesting doors for programmers to speed up workloads and manipulate data.

AMD Trinity APU die shot. Piledriver modules and caches are on the left.

While AMD and the newly formed HSA Foundation (currently AMD, ARM, Imagination Technologies, MediaTek, and Texas Instruments) are pushing heterogeneous computing the hardest, it is technology that will be beneficial to everyone. The industry is definitely moving towards a more blended processing environment, something that began with the rise of specialty GPGPU workstation programs and is now starting to reach consumer applications. Standards and languages such as C++ AMP, OpenCL, and Nvidia’s CUDA harness the graphics card for certain tasks. More and more programs use the GPU in some way (even if it’s just drawing and managing the UI), and as developers jump on board, the software side should keep moving towards using every component to its fullest. On the hardware side of things, we are already seeing integration of GPUs and specialty application processors onto the CPU die (at least in mobile SoCs). Such varied configurations are becoming common and are continuing to evolve in a combined-architecture direction.

The mobile industry is a good example of HSA catching on, with new system-on-a-chip processors coming out continuously and mobile operating systems that harness GPU horsepower to assist the ARM CPU cores. AMD isn’t just looking at low-power devices, however — it’s pushing for “one (HSA) chip to rule them all” solutions that combine GPU cores with CPU cores (and even ARM cores!), each processing what it is best at to give the best user experience.
The overall transition to hardware and software that fully takes advantage of both processing types is still a ways off, but we are getting closer every day. Heterogeneous computing is the future, and assuming most software developers can be made to recognize the benefits and program to take advantage of the new chips, I’m all for it. When additional CPU cores and smaller process nodes stop making the cut, heterogeneous computing is where the industry will look for performance gains.




New Kernel Vulnerabilities Affect Ubuntu 12.04 LTS


Canonical announced a few hours ago, June 13th, in a security notice, that a new Linux kernel update for its Ubuntu 12.04 LTS (Precise Pangolin) operating system is now available, fixing six security vulnerabilities discovered in the Linux kernel packages by various developers.

These are the six kernel vulnerabilities found in the kernel packages for Ubuntu 12.04 LTS: CVE-2012-2121, CVE-2012-2133, CVE-2012-2313, CVE-2012-2319, CVE-2012-2383, and CVE-2012-2384.

As usual, you can click on each one to see how it affects your system, or go here for in-depth descriptions, as the flaws affect other Linux operating systems as well.

The security flaws can be fixed if you upgrade your system(s) to the linux-image-3.2.0-25 (3.2.0-25.40) package(s). To apply the update, run the Update Manager application.
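If you prefer the terminal, the usual apt commands pull in the same packages on a stock Ubuntu 12.04 install:

    sudo apt-get update
    sudo apt-get dist-upgrade
    # after the reboot, confirm the new kernel is running:
    uname -r     # should report 3.2.0-25 (e.g. 3.2.0-25-generic)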

Don't forget to reboot your computer after the upgrade!

ATTENTION: Due to an unavoidable ABI change, the kernel packages have a new version number, which will force you to reinstall and recompile all third-party kernel modules you might have installed. Moreover, if you use the linux-restricted-modules package, you have to update it as well to get modules which work with the new Linux kernel version.