Archive for September 24th, 2010

MySQL :: White Papers – MySQL Cluster

via MySQL :: White Papers – MySQL Cluster.

  • Scaling Web Services with MySQL Cluster: An Alternative to the MySQL Memory Storage Engine
    The MySQL MEMORY storage engine is effective for smaller, “read-mostly” deployments, but as web services evolve its users can find themselves confronting issues in scalability, concurrency and availability.
    MEMORY is a basic MySQL storage engine designed for in-memory operations. MySQL Cluster, which itself can be implemented as a MySQL storage engine, can perform all the same in-memory operations, and is faster, more reliable and uses less RAM for data, even on a single node.
    In performance testing, MySQL Cluster was able to deliver 30x higher throughput with 1/3rd the latency of the MEMORY storage engine on just a single node.
    MySQL Cluster can be configured and run in the same way as MEMORY (i.e. on a single host with no replication and no persistence), and any of these high availability and scalability attributes can then be added in any combination as the workload evolves. The database can be scaled across multiple nodes without implementing sharding (partitioning) in the application, significantly reducing both cost and complexity.
    This whitepaper compares the MEMORY storage engine and MySQL Cluster, including a performance study of the two technologies, and then provides a step-by-step guide for existing MEMORY storage engine users to upgrade to MySQL Cluster.
    Read more »
  • Using MySQL Cluster 7.1 for Web and eCommerce Applications
    As your online services expand, so too can the demands on your web infrastructure. Challenges include:
    • Growing revenue streams and customer loyalty from your web and eCommerce applications
    • The need for continuous application availability and real time responsiveness to ensure a quality customer experience
    • Constant pace of innovation to quickly deliver compelling new services to market

    MySQL Cluster is a proven key component of web infrastructure that can help you cost-effectively deploy online applications to generate new revenue streams and build vibrant user communities.
    Read the white paper to learn how deploying MySQL Cluster with your web and eCommerce services enables you to grow revenue and enhance customer loyalty.

    Read more »

  • Building an Open Source, Carrier Grade Platform for Data Management with MySQL Cluster
    Whether Service Providers are looking to deploy new Web/Telco 2.0 applications to mobile internet users or to consolidate subscriber data within the network to support greater service personalization and targeted advertising, the database plays a key enabling role.
    Amid the rapid shift away from closed, expensive and proprietary technology, MySQL has grown to become the world’s most popular open source database. In this paper we explore how an open source, carrier-grade platform architecture can cost-effectively meet the communication industry’s high availability, scalability and real-time performance requirements.
    Read more »
  • MySQL Cluster 7.1: Architecture and New Features Whitepaper
    MySQL Cluster has been widely adopted for a range of telecommunications, web and enterprise workloads demanding carrier-grade availability with high transaction throughput and low latency. In this paper we explore the new features introduced in MySQL Cluster 7.1 to meet an ever-expanding and more demanding set of mission-critical data management requirements.
    Read more »
  • MySQL Cluster Evaluation Guide – Designing, Evaluating and Benchmarking MySQL Cluster
    In this whitepaper, learn the fundamentals of how to design and select the proper components for a successful MySQL Cluster evaluation. We explore hardware, networking and software requirements, and work through basic functional testing and evaluation best practices.
    Read more »
  • Guide to Optimizing Performance of the MySQL Cluster Database
    This guide explores how to tune and optimize the MySQL Cluster database to handle diverse workload requirements. It discusses data access patterns and how to build distribution awareness into applications, before exploring schema and query optimization, parameter tuning and how to get the best out of the latest innovations in hardware design.
    The guide concludes with recent performance benchmarks conducted with the MySQL Cluster database, an overview of how MySQL Cluster can be integrated with other MySQL storage engines, and a summary of additional resources to help you optimize MySQL Cluster performance with your applications.
    Read more »

Rubinius : Use Ruby

via Rubinius : Use Ruby™.

Technically, what is Rubinius?

Rubinius is an implementation of the Ruby programming language.

The Rubinius bytecode virtual machine is written in C++, incorporating LLVM to compile bytecode to machine code at runtime. The bytecode compiler and the vast majority of the core classes are written in pure Ruby.
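To get a feel for the compile-to-bytecode step, the standard interpreter (MRI 1.9 and later) exposes its own, entirely separate bytecode compiler. This is an illustrative analogue only, using MRI's `RubyVM::InstructionSequence`, not Rubinius's API:

```ruby
# Rubinius compiles Ruby source to bytecode for its VM, with the compiler
# itself written in Ruby. MRI exposes a similar idea (different bytecode,
# different API) via RubyVM::InstructionSequence:
iseq = RubyVM::InstructionSequence.compile("1 + 2")
puts iseq.disasm   # human-readable bytecode listing for "1 + 2"
puts iseq.eval     # running the compiled bytecode prints 3
```

Rubinius's own compiler output can be inspected similarly from within Rubinius, though the class names and instruction set differ.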

To interact with the rest of the system, the VM provides primitives which can be attached to methods and invoked. Additionally, FFI provides a direct call path to most C functions.
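Rubinius ships its own FFI API; as a rough, runnable analogue of that "direct call path to C functions," MRI's standard-library Fiddle works in the same spirit. The Fiddle names below are MRI's, not Rubinius's:

```ruby
require 'fiddle'

# Bind C's strlen() from the already-loaded C library and call it directly.
# Fiddle is MRI's stdlib foreign-function interface; Rubinius's FFI differs
# in detail but follows the same idea: declare a C function's signature,
# then invoke it as if it were a Ruby method.
strlen = Fiddle::Function.new(
  Fiddle::Handle::DEFAULT['strlen'],   # look up the symbol in loaded libraries
  [Fiddle::TYPE_VOIDP],                # argument types: const char *
  Fiddle::TYPE_SIZE_T                  # return type: size_t
)
puts strlen.call("rubinius")           # calls into libc; prints 8
```

The Ruby string is passed as a pointer argument, and the C return value comes back as an ordinary Ruby Integer.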

Rubinius uses a precise, compacting, generational garbage collector. It includes a compatible C-API for C extensions written for the standard Ruby interpreter (often referred to as MRI—Matz’s Ruby Implementation).

How compatible is Rubinius?

From the start, compatibility has been critical to us. To that end, we created RubySpec to ensure that we maintain parity with official Ruby. We are currently at a 93% RubySpec pass rate and growing every day.

For now, Rubinius targets MRI 1.8.7 (1.9 support is on the post-1.0 list). Most gems, Rails plugins and C extensions work right out of the box. If you find a bug, let us know and we’ll get on top of it.

Opscode’s Chef development environment

DEVELOPMENT:

Before working on the code, if you plan to contribute your changes, you need to read the Opscode Contributing document.

You will also need to set up the repository with the appropriate branches. We document the process on the Chef Wiki.

Once your repository is set up, you can start working on the code. We do use BDD/TDD with RSpec and Cucumber, so you’ll need to get a development environment running.

ENVIRONMENT:

In order to have a development environment where changes to the Chef code can be tested, we’ll need to install a few things after setting up the Git repository.

Requirements:

Install these via your platform’s preferred method, for example apt, yum, ports or emerge.

  • Git
  • CouchDB
  • libxml2 development package (for webrat)
  • libxslt development package (for webrat)

Install the following RubyGems.

  • ohai
  • rake
  • rspec
  • cucumber
  • webrat
  • merb-core
  • roman-merb_cucumber

Ohai is also an Opscode project and is available on GitHub at github.com/opscode/ohai/tree/master.

roman-merb_cucumber is available from GitHub:

  gem install --source http://gems.github.com/ roman-merb_cucumber

Starting the Environment:

Once everything is installed, run the dev:features rake task. Since the features do integration testing, root access is required.

  sudo rake dev:features

The dev:features task:

  • Installs the chef, chef-server and chef-server-slice gems; it will fail if any of the required gems above are missing.
  • Starts chef-server on ports 4000 and 4001.
  • Starts chef-indexer.
  • Starts CouchDB on port 5984.
  • Starts the stompserver on port 61613.

You’ll know it’s running when you see:

   ~ Activating slice 'ChefServerSlice' ...
  merb : worker (port 4000) ~ Starting Mongrel at port 4000
  merb : worker (port 4000) ~ Successfully bound to port 4000
  merb : worker (port 4001) ~ Starting Mongrel at port 4001
  merb : worker (port 4001) ~ Successfully bound to port 4001

You’ll want to leave this terminal running the dev environment.

Web Interface:

With the dev environment running, you can now access the web interface via localhost:4000/. Supply an OpenID to log in.

Spec testing:

We use RSpec for unit/spec tests.

  rake spec

This run doesn’t use the development environment, because it tests Chef’s internals directly. For integration/usage testing, we use Cucumber features.
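If you haven't written BDD-style specs before: a unit spec describes behavior in near-English `describe`/`it` blocks. Chef's own specs use RSpec; the self-contained sketch below uses the structurally similar spec mode of Minitest (bundled with Ruby, so it runs without extra gems), and the `Seasoning` class is a made-up stand-in, not part of Chef:

```ruby
require 'minitest/autorun'

# A made-up class under test -- a stand-in for illustration, not part of Chef.
class Seasoning
  def initialize(herb)
    @herb = herb
  end

  def to_s
    "a pinch of #{@herb}"
  end
end

# BDD-style spec: describe the unit, then state each expected behavior.
# Chef's real specs use RSpec, whose describe/it structure reads the same.
describe Seasoning do
  it "renders itself as a pinch of the herb" do
    _(Seasoning.new("thyme").to_s).must_equal "a pinch of thyme"
  end
end
```

Running `rake spec` executes many files of exactly this shape against Chef's internal classes.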

Integration testing:

We test integration with Cucumber. The available feature tests are rake tasks:

  rake features                            # Run Features with Cucumber
  rake features:api                        # Run Features with Cucumber
  rake features:api:nodes                  # Run Features with Cucumber
  rake features:api:nodes:create           # Run Features with Cucumber
  rake features:api:nodes:delete           # Run Features with Cucumber
  rake features:api:nodes:list             # Run Features with Cucumber
  rake features:api:nodes:show             # Run Features with Cucumber
  rake features:api:nodes:update           # Run Features with Cucumber
  rake features:api:roles                  # Run Features with Cucumber
  rake features:api:roles:create           # Run Features with Cucumber
  rake features:api:roles:delete           # Run Features with Cucumber
  rake features:api:roles:list             # Run Features with Cucumber
  rake features:api:roles:show             # Run Features with Cucumber
  rake features:api:roles:update           # Run Features with Cucumber
  rake features:client                     # Run Features with Cucumber
  rake features:language                   # Run Features with Cucumber
  rake features:language:recipe_include    # Run Features with Cucumber
  rake features:provider:package:macports  # Run Features with Cucumber
  rake features:provider:remote_file       # Run Features with Cucumber
  rake features:search                     # Run Features with Cucumber