Eclipse hadoop plugin


Eclipse plugin for Hadoop 2.6.0

Integrating Hadoop-2.6.0 with eclipse

This walkthrough assumes the following:

  1. A user “hdfs” exists, under which all Hadoop processes run.
  2. Hadoop is installed in the directory “/opt/hadoop”.
  3. Eclipse is installed in the directory “/opt/eclipse”.

Step 1: Download the hadoop-eclipse-plugin 2.6.0 jar

Step 2: Copy the Map/Reduce plugin jar into the plugins directory of your Eclipse folder:

sudo cp /home/hdfs/Downloads/hadoop-eclipse-plugin-2.6.0.jar /opt/eclipse/plugins/

Restart Eclipse using the command:

/opt/eclipse/eclipse -vm /usr/local/jdk1.8.0_05/bin/java -vmargs -Xmx1024m

If Eclipse is not coming up because of an X11 forwarding issue, try using “sux” instead of “su” while switching to the “hdfs” user.

Step 3: Start Eclipse: $ECLIPSE_HOME/eclipse

Step 4: In the Eclipse menu, click Window --> Open Perspective --> Other --> Map/Reduce.

Step 5: In the Map/Reduce Locations view at the bottom, click the icon to add a new Hadoop location.

Step 6: Enter the ports on which MapReduce and HDFS are running, plus the Hadoop user name. For reference, the MapReduce port (9001) is specified in $HADOOP_HOME/conf/mapred-site.xml, and the HDFS port (9000) is specified in $HADOOP_HOME/conf/core-site.xml; a sketch of both entries follows.
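For illustration, the entries the step refers to typically look like this (classic Hadoop property names; host and ports are examples to adjust to your cluster):

<!-- $HADOOP_HOME/conf/mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>

<!-- $HADOOP_HOME/conf/core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>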

Step 7: Once the Hadoop location has been added, DFS Locations will be displayed in the Eclipse Project Explorer window (Window --> Show View --> Project Explorer).

Step 8: Right-click the DFS location and click Connect.

Step 9: Once connected successfully, it will display all the DFS folders.

Step 10: You can create directories, upload files to an HDFS location, and download files to the local machine by right-clicking any of the listed directories.

Source: https://stackoverflow.com/q/28494727

hadoop2x-eclipse-plugin

eclipse plugin for hadoop 2.x.x

How to build

[user@host hadoop2x-eclipse-plugin]$ cd src/contrib/eclipse-plugin

[user@host eclipse-plugin]$ ant jar -Dversion=2.4.1 -Dhadoop.version=2.4.1 -Declipse.home=/opt/eclipse -Dhadoop.home=/usr/share/hadoop

the final jar will be generated at the directory

${hadoop2x-eclipse-plugin}/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.4.1.jar

release version included

release/hadoop-eclipse-kepler-plugin-2.4.1.jar # not tested yet

release/hadoop-eclipse-kepler-plugin-2.2.0.jar

options required

version: plugin version

hadoop.version: the hadoop version you want to compile against

eclipse.home: path of eclipse home

hadoop.home: path of hadoop 2.x home

How to debug

start eclipse with debug parameter:
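A hedged sketch using the standard Eclipse launcher options -clean, -consolelog and -debug, assuming Eclipse is installed in /opt/eclipse (adjust the path to your install):

$ /opt/eclipse/eclipse -clean -consolelog -debug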

Note: to resolve compile issues:

  1. For a different hadoop version, adjust ${hadoop2x-eclipse-plugin-master}/ivy/libraries.properties to match the hadoop dependency lib versions.
  2. Modify ${hadoop2x-eclipse-plugin}/src/contrib/eclipse-plugin/build.xml, in the node <attribute name="Bundle-ClassPath" ...>, to add the jars needed.
Source: https://github.com/winghc/hadoop2x-eclipse-plugin

Build, Install and Configure Eclipse Plugin for Apache Hadoop 2.2.0

Apache Hadoop Development Tools (HDT) is still in the development phase, so no official distribution of a Hadoop 2.2.0 Eclipse plugin is available yet. But we can build one using winghc/hadoop2x-eclipse-plugin. In this post, we'll build, install and configure the plugin with Eclipse or any Eclipse-based IDE (say, Spring Tool Suite) to ease development with the Hadoop framework.

The Hadoop Development Tools (HDT) is a set of plugins for the Eclipse IDE for developing against the Hadoop platform.

Currently we are in the process of porting the existing MapReduce tools from the Apache Hadoop project to allow working with multiple versions of Hadoop from within one IDE.

Come get involved as we move towards our first release!

This project is currently a member of the Apache Incubator, so check back for updates, or come join us on the project mailing list.

Tools and Technologies used in this article

  1. Apache Hadoop 2.2.0
  2. Spring Tool Suite 3.5.0
  3. Windows 7 OS
  4. JDK 1.6
  5. Apache Ant

1. Download

Download the winghc/hadoop2x-eclipse-plugin zip.

2. Extract

Extract the zip to a local directory (say, 'C:\hadoop2x-eclipse-plugin').

3. Build

  1. Open 'C:\hadoop2x-eclipse-plugin\src\contrib\eclipse-plugin' in the Command prompt.
  2. Run the Ant build (see the sketch after this note).
    eclipse.home: Installation directory of the Eclipse IDE.
    hadoop.home: Hadoop installation directory.

    Note: Internet connection is required as 'ivy-2.1.0.jar' will be downloaded.
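Based on the ant command shown in the hadoop2x-eclipse-plugin README above, the Windows invocation would look like this (the version and paths are examples; adjust them to your setup):

ant jar -Dversion=2.2.0 -Dhadoop.version=2.2.0 -Declipse.home=C:\eclipse -Dhadoop.home=C:\hadoop-2.2.0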

4. Install

On a successful build, 'hadoop-eclipse-plugin-2.2.0.jar' will be generated inside 'C:\hadoop2x-eclipse-plugin\build\contrib\eclipse-plugin'. Copy this jar and paste it into the 'plugins' directory of your IDE.

5. Configure

Restart the Eclipse IDE if it is already running; otherwise start it.

  1. Go to Window --> Open Perspective --> Other and select the 'Map/Reduce' perspective.
  2. Click 'New Hadoop location...' (blue elephant icon), define the Hadoop location used to run MapReduce applications, and click the 'Finish' button.
    Define Hadoop location

    Map/Reduce(V2) Master: Address of the Map/Reduce master node (The Job Tracker).
    DFS Master: Address of the Distributed FileSystem Master node (The Name Node).

    To find the 'Port' numbers, start Hadoop and open http://localhost:8088/cluster in a browser. Click Tools --> Configuration and search for the following properties.

    (Screenshots: DFS Master, Map/Reduce(V2) Master, Hadoop Configuration)
  3. Now we can browse the Hadoop file system and perform various file/folder operations entirely from the GUI.
    DFS locations

    Also, we can easily create a Map/Reduce Project, Mapper, Reducer and MapReduce Driver using the wizard (File --> New --> Other... --> Map/Reduce) and jump into Hadoop programming; a minimal example follows below.
    Map/Reduce Project, Mapper, Reducer and MapReduce Driver
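For reference, the classes the wizard generates follow the standard Hadoop 2.x MapReduce API. The sketch below is the canonical WordCount example from the Hadoop documentation, combining a Mapper, a Reducer and a driver in one file:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: wires the job together; args[0] is the input path, args[1] the output path.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}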


Source: https://www.srccodes.com/build-install-configure-eclipse-plugin-apache-hadoop/

Eclipse is a powerful IDE for Java development. Since Hadoop and MapReduce programming is done in Java, it is better to do our programming in a well-featured Integrated Development Environment (IDE). So, in this post, we are going to learn how to install Eclipse on an Ubuntu machine and configure it for Hadoop and MapReduce programming. Let's start with downloading and installing Eclipse on an Ubuntu machine.

Table of Contents

1. Install Eclipse:

  1. Download the latest version of Eclipse IDE for Java EE developers from the Eclipse downloads page http://www.eclipse.org/downloads/. In this post, we describe the installation of Eclipse Kepler, the latest version at the time of writing.
  2. Extract the *.tar.gz file into your preferred installation directory, usually /opt/eclipse.
  3. Set up the environment variable ECLIPSE_HOME in the .bashrc file with the installation directory, and add the installation directory to the existing list of directories in the PATH environment variable.

Below are the useful terminal commands to perform the above actions in the same sequence.
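A minimal sketch of the download-and-extract step, assuming the Kepler Java EE tarball name (adjust the file name to the release you downloaded):

$ cd ~/Downloads
$ sudo tar -xzf eclipse-jee-kepler-SR2-linux-gtk-x86_64.tar.gz -C /opt

Then add the below two entries into the .bashrc file.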

 

export ECLIPSE_HOME="/opt/eclipse"

export PATH="$ECLIPSE_HOME:$PATH"

Now we can start Eclipse from the terminal with the $ eclipse command.

2. Eclipse Configuration for Hadoop/Mapreduce:

Eclipse configuration for Hadoop can be done in two ways: by building the Eclipse plugin for the Hadoop version currently in use and copying it into the Eclipse plugins folder, or by installing the Maven plugin for integration of Eclipse with Hadoop and performing the necessary setup.

Creation of Hadoop Eclipse Plugin:

To create a customized Hadoop Eclipse plugin for the Hadoop version currently in use, follow the steps below. In this post, we create the plugin for the hadoop-2.3.0 release.

Prerequisites:

    1. ant – We need the Ant build tool installed on our machine to create the plugin jar file. To install Ant on an Ubuntu machine, use the command below.

 

$ sudo apt-get install ant

2.   git – Git needs to be installed on our machine to clone from GitHub the source code required to build the jar file. Git can be installed with the command below.

 

$ sudo apt-get install git

Plugin creation:

    1. Download the required source code from GitHub into a preferred location.
    2. The following path has some customized source files to create the plugin for the hadoop-2.3.0 release, the latest version at the time of writing this post: https://github.com/siva535/hadoop-eclipse-plugin-2.3.0/releases/download/1.0/hadoop-eclipse-plugin.zip
    3. Extract the source files from the above zip file and change directory: $ cd Downloads/hadoop-eclipse-plugin/src/contrib/eclipse-plugin.
    4. Compile the source code and build the jar file with the command below.

 

$ ant jar -Dversion=2.3.0 -Declipse.home=/opt/eclipse/ -Dhadoop.home=/usr/lib/hadoop/hadoop-2.3.0/


Note:

In the above ant jar command, the -Dversion=2.3.0 property specifies the version number of the hadoop release; it is specific to hadoop-2.3.0. The same source files can be used for other releases as well by changing the version number in this parameter and providing the appropriate hadoop home directory.

In this example, hadoop's home directory is given with the

-Dhadoop.home=/usr/lib/hadoop/hadoop-2.3.0/ property. Change this as per your hadoop installation directory.

Also, we have changed the libraries.properties file in the hadoop-eclipse-plugin/ivy/ directory to avoid version mismatch errors (the required version files are not present in the hadoop home directory).

For building the eclipse plugin for the hadoop-2.3.0 release, the above source code and commands work well, and no changes are needed. Changes are needed only to generate the plugin for other versions.

5.   Now copy this plugin jar file from hadoop-eclipse-plugin/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.3.0.jar to /opt/eclipse/plugins directory.

6.   After restart of Eclipse, the Map/Reduce perspective will be available.

Maven plugin for Integration of Eclipse with Hadoop:

Prerequisites:

  1. For this option, maven needs to be installed on our machine and this can be done with the below command if it is not installed already.

 

$ sudo apt-get install maven2

Setup:

  1. We need to set up classpath variables for the Ant and Maven installations. Start Eclipse and go to Window –> Preferences, then open Java –> Build Path –> Classpath Variables. Add an entry for ANT_HOME as /usr/share/ant (our Ant installation path) and an entry for M2_REPO with the Maven installation directory.


2.  Install the m2e plugin by navigating to Help –> Install New Software. Enter http://download.eclipse.org/technology/m2e/releases into the “Work with” box, select the plugin, click the Next button, and complete the installation.


3.  For the configuration of hadoop, Eclipse needs external jars from the JAVA_HOME/lib directory, where JAVA_HOME is our Java installation directory. From JAVA_HOME/lib, we need to add the tools.jar file as an external jar.

    1. Go to Window –> Preferences –> Java –> Installed JREs.
    2. Select the default JRE, click Edit –> Add External JARs, and select the tools.jar file from the JAVA_HOME/lib directory.


4.  Download the Hadoop source code from SVN or Git. Using Git, the latest version of hadoop can be downloaded with the command below.

 

$ git clone git://git.apache.org/hadoop-common.git

5.  Change directory (cd) to the hadoop-common folder and run the command below from the terminal to build the Maven hadoop project.

 

user@ubuntu:~/hadoop-common$ mvn generate-sources generate-test-sources


6.  Import the above project into Eclipse:

    1. Go to File -> Import.
    2. Select Maven -> Existing Maven Projects.
    3. Navigate to the top directory of the downloaded source (the hadoop-common directory in this example).


7.  The sources generated above may show some errors due to the Java files generated by protoc. To fix them, right-click each project –> Build Path –> Configure Build Path.


Link sources from target/generated-sources and target/generated-test-sources. For the inclusion pattern, select “**/*.java”.


Conclusion:

As discussed above, option 1 (building the hadoop eclipse plugin) is easier than resolving the errors in option 2. So we preferred option 1 to create hadoop-eclipse-plugin-2.3.0 and copied it into the /opt/eclipse/plugins folder.

For an example MapReduce program (WordCount) developed under the Eclipse IDE, please refer to the next post –> Sample Mapreduce Program In Eclipse.

Source: http://hadooptutorial.info/eclipse-configuration-for-hadoop/

Hadoop plugin eclipse

Hadoop-eclipse-plugin-2.8.5

Project introduction

Building on Windows requires Ant and Eclipse to be installed.

Before building, first install Ant and configure its environment.

1.1. Install Ant

  1. First download Ant, for example version apache-ant-1.10.5 (search online for a download).
  2. Configure the environment:
     ANT_HOME=D:\Program Files (x86)\apache-ant-1.10.5
     Path=%Path%;%ANT_HOME%\bin;
  3. Verify that the installation and configuration succeeded (a quick check follows).
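A quick verification, assuming Ant is now on the Path (the exact output depends on your installation):

> ant -version
Apache Ant(TM) version 1.10.5 compiled on ...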

1.2. Download and install hadoop-2.8.5.tar.gz

Extract hadoop-2.8.5.tar.gz to a path that contains no spaces, for example D:\hadoop-2.8.5.

1.3. Download and install Eclipse

This guide installs eclipse-standard-luna-SR2-win32-x86_64.zip, to a path such as D:\Eclipse\EclipseLuna.

1.4. Download and extract hadoop2x-eclipse-plugin-master.zip

Search for hadoop2x-eclipse-plugin, download it, and extract it under the D:\Eclipse path.

Modify build.xml

Modify the file ~\hadoop2x-eclipse-plugin-master\src\contrib\eclipse-plugin\build.xml as follows:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with this
   work for additional information regarding copyright ownership. The ASF
   licenses this file to You under the Apache License, Version 2.0 (the
   "License"); you may not use this file except in compliance with the
   License. You may obtain a copy of the License at
   http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
   law or agreed to in writing, software distributed under the License is
   distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   KIND, either express or implied. See the License for the specific language
   governing permissions and limitations under the License. -->
<project default="jar" name="eclipse-plugin">

  <import file="../build-contrib.xml"/>

  <path id="eclipse-sdk-jars">
    <fileset dir="${eclipse.home}/plugins/">
      <include name="org.eclipse.ui*.jar"/>
      <include name="org.eclipse.jdt*.jar"/>
      <include name="org.eclipse.core*.jar"/>
      <include name="org.eclipse.equinox*.jar"/>
      <include name="org.eclipse.debug*.jar"/>
      <include name="org.eclipse.osgi*.jar"/>
      <include name="org.eclipse.swt*.jar"/>
      <include name="org.eclipse.jface*.jar"/>
      <include name="org.eclipse.team.cvs.ssh2*.jar"/>
      <include name="com.jcraft.jsch*.jar"/>
    </fileset>
  </path>

  <path id="hadoop-sdk-jars">
    <fileset dir="${hadoop.home}/share/hadoop/mapreduce">
      <include name="hadoop*.jar"/>
    </fileset>
    <fileset dir="${hadoop.home}/share/hadoop/hdfs">
      <include name="hadoop*.jar"/>
    </fileset>
    <fileset dir="${hadoop.home}/share/hadoop/common">
      <include name="hadoop*.jar"/>
    </fileset>
  </path>

  <!-- Override classpath to include Eclipse SDK jars -->
  <path id="classpath">
    <pathelement location="${build.classes}"/>
    <!--pathelement location="${hadoop.root}/build/classes"/-->
    <path refid="eclipse-sdk-jars"/>
    <path refid="hadoop-sdk-jars"/>
  </path>

  <!-- Skip building if eclipse.home is unset. -->
  <target name="check-contrib" unless="eclipse.home">
    <property name="skip.contrib" value="yes"/>
    <echo message="eclipse.home unset: skipping eclipse plugin"/>
  </target>

  <!-- target name="compile" depends="init, ivy-retrieve-common" unless="skip.contrib" -->
  <target name="compile" depends="init" unless="skip.contrib">
    <echo message="contrib: ${name}"/>
    <javac encoding="${build.encoding}" srcdir="${src.dir}" includes="**/*.java"
           destdir="${build.classes}" includeAntRuntime="false"
           debug="${javac.debug}" deprecation="${javac.deprecation}">
      <classpath refid="classpath"/>
    </javac>
  </target>

  <!-- Override jar target to specify manifest -->
  <target name="jar" depends="compile" unless="skip.contrib">
    <mkdir dir="${build.dir}/lib"/>
    <copy todir="${build.dir}/lib/" verbose="true">
      <fileset dir="${hadoop.home}/share/hadoop/mapreduce">
        <include name="hadoop*.jar"/>
      </fileset>
    </copy>
    <copy todir="${build.dir}/lib/" verbose="true">
      <fileset dir="${hadoop.home}/share/hadoop/common">
        <include name="hadoop*.jar"/>
      </fileset>
    </copy>
    <copy todir="${build.dir}/lib/" verbose="true">
      <fileset dir="${hadoop.home}/share/hadoop/hdfs">
        <include name="hadoop*.jar"/>
      </fileset>
    </copy>
    <copy todir="${build.dir}/lib/" verbose="true">
      <fileset dir="${hadoop.home}/share/hadoop/yarn">
        <include name="hadoop*.jar"/>
      </fileset>
    </copy>
    <copy todir="${build.dir}/classes" verbose="true">
      <fileset dir="${root}/src/java">
        <include name="*.xml"/>
      </fileset>
    </copy>
    <copy file="${hadoop.home}/share/hadoop/common/lib/protobuf-java-${protobuf.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/log4j-${log4j.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/commons-configuration-${commons-configuration.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/commons-lang-${commons-lang.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/commons-collections-${commons-collections.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/jackson-core-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/jackson-mapper-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/httpclient-${httpclient.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/slf4j-log4j12-${slf4j-log4j12.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/slf4j-api-${slf4j-api.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/guava-${guava.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/hadoop-auth-${hadoop.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/netty-${netty.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/htrace-core4-${htrace.version}-incubating.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/woodstox-core-${woodstox.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.home}/share/hadoop/common/lib/stax2-api-${stax2.version}.jar" todir="${build.dir}/lib" verbose="true"/>
    <jar jarfile="${build.dir}/hadoop-${name}-${hadoop.version}.jar" manifest="${root}/META-INF/MANIFEST.MF">
      <manifest>
        <attribute name="Bundle-ClassPath"
                   value="classes/,
 lib/hadoop-hdfs-client-${hadoop.version}.jar,
 lib/hadoop-mapreduce-client-core-${hadoop.version}.jar,
 lib/hadoop-mapreduce-client-common-${hadoop.version}.jar,
 lib/hadoop-mapreduce-client-jobclient-${hadoop.version}.jar,
 lib/hadoop-auth-${hadoop.version}.jar,
 lib/hadoop-common-${hadoop.version}.jar,
 lib/hadoop-hdfs-${hadoop.version}.jar,
 lib/protobuf-java-${protobuf.version}.jar,
 lib/log4j-${log4j.version}.jar,
 lib/commons-cli-${commons-cli.version}.jar,
 lib/commons-configuration-${commons-configuration.version}.jar,
 lib/commons-lang-${commons-lang.version}.jar,
 lib/commons-collections-${commons-collections.version}.jar,
 lib/jackson-core-asl-${jackson.version}.jar,
 lib/jackson-mapper-asl-${jackson.version}.jar,
 lib/httpclient-${httpclient.version}.jar,
 lib/slf4j-log4j12-${slf4j-log4j12.version}.jar,
 lib/slf4j-api-${slf4j-api.version}.jar,
 lib/guava-${guava.version}.jar,
 lib/netty-${netty.version}.jar,
 lib/servlet-api-${servlet-api.version}.jar,
 lib/htrace-core4-${htrace.version}-incubating.jar,
 lib/commons-io-${commons-io.version}.jar"/>
      </manifest>
      <fileset dir="${build.dir}" includes="classes/ lib/"/>
      <!--fileset dir="${build.dir}" includes="*.xml"/-->
      <fileset dir="${root}" includes="resources/ plugin.xml"/>
    </jar>
  </target>
</project>

Modify libraries.properties

Modify the file ~\hadoop2x-eclipse-plugin-master\ivy\libraries.properties as follows:

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This properties file lists the versions of the various artifacts used by hadoop and components.
# It drives ivy and the generation of a maven POM

# This is the version of hadoop we are generating
hadoop.version=2.8.5
hadoop-gpl-compression.version=0.1.0

# These are the versions of our dependencies (in alphabetical order)
apacheant.version=1.7.0
ant-task.version=2.0.10
asm.version=3.2
aspectj.version=1.6.5
aspectj.version=1.6.11
checkstyle.version=4.2
commons-cli.version=1.2
commons-codec.version=1.4
commons-collections.version=3.2.2
commons-configuration.version=1.6
commons-daemon.version=1.0.13
commons-httpclient.version=3.0.1
commons-lang.version=2.6
commons-logging.version=1.1.3
commons-logging-api.version=1.1.3
commons-math.version=3.1.1
commons-el.version=1.0
commons-fileupload.version=1.2
commons-io.version=2.4
commons-net.version=3.1
core.version=3.1.1
coreplugin.version=1.3.2
hsqldb.version=1.8.0.10
htrace.version=4.0.1
httpclient.version=4.5.2
ivy.version=2.1.0
jasper.version=5.5.12
jackson.version=1.9.13
# not able to figure out the version of jsp & jsp-api version to get it resolved through ivy
# but still declared here as we are going to have a local copy from the lib folder
jsp.version=2.1
jsp-api.version=5.5.12
jsp-api-2.1.version=6.1.14
jsp-2.1.version=6.1.14
jets3t.version=0.6.1
jetty.version=6.1.26
jetty-util.version=6.1.26
jersey-core.version=1.9
jersey-json.version=1.9
jersey-server.version=1.9
junit.version=4.11
jdeb.version=0.8
jdiff.version=1.0.9
json.version=1.0
kfs.version=0.1
log4j.version=1.2.17
lucene-core.version=2.3.1
mockito-all.version=1.8.5
jsch.version=0.1.54
oro.version=2.0.8
rats-lib.version=0.5.1
servlet.version=4.0.6
servlet-api.version=2.5
slf4j-api.version=1.7.10
slf4j-log4j12.version=1.7.10
wagon-http.version=1.0-beta-2
woodstox.version=5.0.3
stax2.version=3.1.4
xmlenc.version=0.52
xerces.version=1.4.4
protobuf.version=2.5.0
guava.version=11.0.2
netty.version=3.6.2.Final

3.1. Run the build command
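Based on the ant invocation in the hadoop2x-eclipse-plugin README shown earlier, the build command takes this form (run from src\contrib\eclipse-plugin under the extracted plugin source; the paths follow the directories set up above and should be adjusted to your machine):

ant jar -Dversion=2.8.5 -Dhadoop.version=2.8.5 -Declipse.home=D:\Eclipse\EclipseLuna -Dhadoop.home=D:\hadoop-2.8.5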

3.2. Successful build

Output like the following indicates the build succeeded:

BUILD SUCCESSFUL
Total time: 2 seconds

3.3. Copy the built plugin

Copy the generated hadoop-eclipse-plugin-2.8.5.jar from build\contrib\eclipse-plugin under the plugin source tree to the Eclipse plugins directory, D:\Eclipse\EclipseLuna\plugins.

This completes the build.

Source: https://github.com/DoubleBirdsU/Hadoop-eclipse-plugin
How to use eclipse to build hadoop plugin

Hadoop Eclipse Plug-in

  • JIRA MAPREDUCE-1262 has the latest status for this plugin. The JIRA contains a compiled plugin JAR you can use for Hadoop 0.20.0 and 0.20.1.
  • JIRA MAPREDUCE-1280 contains a version of the plugin that works with hadoop 0.20.2 and eclipse 3.5/3.6.

The Hadoop Eclipse Plug-in provides tools to ease the experience of Map/Reduce on Hadoop. Among other things, the plug-in provides support to:

  • create Mapper, Reducer, Driver classes;
  • browse and interact with distributed file systems;
  • submit jobs and monitor their execution.

Requirements

To ensure a safe behavior of the plug-in, you should consider the following facts and recommendations:

  • The plug-in has been tested on Eclipse 3.2+ only, using Java5+ compliant JVMs only.
  • To compile jobs on your development host, you need the Hadoop jars. In most cases, you only need hadoop-X.Y-core.jar. Make sure you build your jobs with the same version of Hadoop that your execution environment runs.
  • Make sure that the Java classes you generate when compiling your jobs are compatible with the JVM of your execution environment. A frequent issue is a Java5 execution environment that cannot execute jobs compiled at a Java6 compliance level.

More will come here soon.


Here is an overview of the Eclipse environment for Hadoop:

The environment is accessible through the "Map/Reduce perspective" (the blue elephant icon in the top-right side of the main window). To open this perspective, select the menu: Window, Open Perspective, Other, and finally Map/Reduce. This perspective is roughly a Java editing environment extended with:

  • a view named "Map/Reduce locations", which lists Hadoop locations (the view at the bottom of the main window),
  • a file browser for the distributed file system associated with each Hadoop location (on the left side).

Hadoop Map/Reduce locations

The location view allows the creation, editing and deletion of Map/Reduce locations.

To create a new location click on the "New Hadoop location..." button in the view toolbar or in the contextual menu.

A wizard pops up and asks for the location parameters.

You must at least fill the following entries:

  • the location name (avoid punctuation marks),
  • the master addresses: hostname or IP address and TCP port numbers for the Map/Reduce master (the JobTracker) and for the Distributed File System (the NameNode).

The Map/Reduce and the DFS masters are colocated by default (i.e. run on the same host).

A SOCKS proxy can be configured if you cannot access the Map/Reduce location directly because your machine is not directly connected to the location. See section "How to use SOCKS proxies" for more details.

Not implemented yet: user name, validation button, load from file button.

How to use SOCKS proxies

To set up a local proxy through an ssh server PROXY at port 21080, for example:
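A minimal sketch, assuming OpenSSH and a reachable ssh server named PROXY (ssh -D opens a dynamic SOCKS forward on the given local port):

$ ssh -D 21080 PROXY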

Note that when using a SOCKS proxy in a local client like the Eclipse plugin, you should ensure that your Hadoop cluster does not inherit your proxy settings, or the Hadoop daemons won't be able to communicate with each other. To override proxy settings in the Hadoop nodes, add the following property setting to each node's hadoop-site.xml:

<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <final>true</final>
  <description>
    Prevent proxy settings set up by clients in their job configs
    from affecting our connectivity.
  </description>
</property>

The standard socket factory produces RPC sockets with direct connections (ie, without going through any proxies), and the "final" attribute prevents your job configuration from overriding this property's value.

How to build and install the plug-in

To build the Eclipse plug-in, you need the Hadoop source files and a working Eclipse environment (version 3.3+). When compiling Hadoop, the Eclipse plug-in will be built if it finds the Eclipse environment path in the ant property "eclipse.home". The build framework looks for this property in ${hadoop-src-root}/src/contrib/eclipse-plugin/build.properties and in $HOME/eclipse-plugin.build.properties.

A typical $HOME/eclipse-plugin.build.properties file would contain the following entry: eclipse.home=/path/to/eclipse

Then the plug-in should be built when compiling Hadoop: ant clean package (from the ${hadoop-src-root} directory), which will produce ${hadoop-src-root}/build/contrib/eclipse-plugin/hadoop-${version}-eclipse-plugin.jar

To install the generated plug-in in your Eclipse environment, remove first all previous versions of the plug-in from your Eclipse environment and copy the hadoop-${version}-eclipse-plugin.jar file generated as described above in your ${eclipse.home}/plugins/ directory. When you restart Eclipse, the Map/Reduce perspective should be available.

Source: https://cwiki.apache.org/confluence/display/HADOOP2/EclipsePlugIn


hadoop eclipse plugin installation directory

You need to copy the "hadoop-eclipse-plugin-*.jar" from your "/hadoop-*/contrib/eclipse-plugin" directory into the "/eclipse/plugins" directory, and then restart Eclipse. But sometimes this doesn't work. A possible reason could be the absence of a few jar files inside hadoop-eclipse-plugin-*.jar. Try this and let me know if it works for you: extract hadoop-eclipse-plugin-*.jar and add the missing dependency jar files into "hadoop-eclipse-plugin-0.20.203.0\lib". After this, add the names of these jars to the Bundle-ClassPath entry in the "hadoop-eclipse-plugin-*\META-INF\MANIFEST.MF" file. Your MANIFEST.MF file should look something like this:
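As a hedged sketch (the jar names and versions below are illustrative, not the answer's exact list), the Bundle-ClassPath entry takes this general form; note that continuation lines in a MANIFEST.MF must begin with a single space:

Bundle-ClassPath: classes/,
 lib/hadoop-core.jar,
 lib/commons-configuration-1.6.jar,
 lib/commons-httpclient-3.0.1.jar,
 lib/commons-lang-2.4.jar,
 lib/jackson-core-asl-1.0.1.jar,
 lib/jackson-mapper-asl-1.0.1.jar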

After doing this, re-jar the package, replace the old "hadoop-eclipse-plugin-*.jar" in the /eclipse/plugins directory, and restart Eclipse.

answered Jun 27 '12 at 9:30 by Tariq

Source: https://stackoverflow.com/questions/11216109/hadoop-eclipse-plugin-installation-directory

