
[Hadoop] Building a distributed Hadoop environment on CentOS 7.6 with Hadoop 3.1.1 — including solutions to common problems

Posted by: shili8 | Posted: 2025-01-10 06:43

**Hadoop Environment Setup Guide**

This article walks through installing and configuring Hadoop 3.1.1 on CentOS 7.6 to build a distributed Hadoop environment.

### I. Preparation

#### 1. Install the JDK

First, install the Java Development Kit (JDK):

```bash
sudo yum install -y java-1.8.0-openjdk-devel
```


#### 2. Download Hadoop

Download the Hadoop 3.1.1 tarball and unpack it to `/usr/local` (the original download link was lost; the Apache release archive is the usual source for this version):

```bash
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
sudo tar -xzf hadoop-3.1.1.tar.gz -C /usr/local
```
### II. Configuring Environment Variables

#### 1. Configure JDK environment variables

Edit `~/.bashrc` and add the following (on CentOS the JDK lives under `/usr/lib/jvm`; the exact directory name may differ, so check with `ls /usr/lib/jvm`):

```bash
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin
```
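On CentOS the OpenJDK directory name often embeds the full build version, so it can help to derive `JAVA_HOME` from the `java` binary itself. A minimal sketch, assuming `java` is on the `PATH` as the usual alternatives-managed symlink:

```shell
# Resolve the symlink chain behind `java` (usually via /etc/alternatives),
# then strip the trailing /bin/java to get a JAVA_HOME candidate.
JAVA_BIN=$(readlink -f "$(command -v java)")
echo "JAVA_HOME candidate: $(dirname "$(dirname "$JAVA_BIN")")"
```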

#### 2. Configure hostname mapping

Edit `/etc/hosts` and add the following entries. Note that mapping every hostname to `127.0.0.1` only works for a single-machine test setup; for a real multi-machine cluster, map each hostname to that node's actual IP address:

```bash
127.0.0.1 hadoop-master
127.0.0.1 hadoop-slave1
127.0.0.1 hadoop-slave2
```

#### 3. Configure Hadoop environment variables

Edit `~/.bashrc`, add the following, then run `source ~/.bashrc` to apply it (`sbin` is included so that `start-dfs.sh` and `start-yarn.sh` are on the `PATH`):

```bash
export HADOOP_HOME=/usr/local/hadoop-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

### III. Configuring Hadoop

#### 1. Configure core-site.xml

Edit `$HADOOP_HOME/etc/hadoop/core-site.xml` and add the following:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
```

#### 2. Configure hdfs-site.xml

Edit `$HADOOP_HOME/etc/hadoop/hdfs-site.xml` and add the following:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

#### 3. Configure mapred-site.xml

Edit `$HADOOP_HOME/etc/hadoop/mapred-site.xml` and add the following:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

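Hadoop generally ignores a property whose `<name>` is misspelled rather than reporting an error, so it is worth eyeballing which names a config file actually declares. A minimal sketch that extracts the `<name>` entries, using a here-doc copy of the mapred-site.xml snippet above in place of a real file:

```shell
# Print the property names declared in a Hadoop-style config file.
grep -o '<name>[^<]*</name>' <<'EOF' | sed -e 's/<[^>]*>//g'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
```

Against a real file, replace the here-doc with the file path, e.g. `grep -o '<name>[^<]*</name>' $HADOOP_HOME/etc/hadoop/core-site.xml`.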
### IV. Starting Hadoop

#### 1. Start the NameNode and DataNodes

Before the very first start, format the NameNode; then start HDFS:

```bash
hdfs namenode -format
start-dfs.sh
```

#### 2. Start the ResourceManager and NodeManagers

```bash
start-yarn.sh
```

#### 3. Check Hadoop status

```bash
jps -l
```

If everything started correctly, the process list should include the NameNode, DataNode, ResourceManager, and NodeManager daemons.

### V. Testing Hadoop

#### 1. Create a WordCount program

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/input"));
        FileOutputFormat.setOutputPath(job, new Path("/tmp/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class Map extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // "\\s+" splits on whitespace; a bare "s+" would split on the letter s.
            String[] words = value.toString().split("\\s+");
            for (String w : words) {
                if (w.isEmpty()) continue; // leading whitespace yields an empty token
                word.set(w);
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```
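The dataflow of the job above can be mimicked locally with a classic shell pipeline, for intuition only (this does not use Hadoop): `tr` plays the mapper (one word per line), `sort` plays the shuffle (grouping identical keys), and `uniq -c` plays the combiner/reducer (summing each key).

```shell
# map:     tr splits each line into one word per line
# shuffle: sort brings identical words together
# reduce:  uniq -c counts the occurrences of each word
# (the sample input stands in for the files under /tmp/input)
printf 'hello hadoop\nhello world\n' | tr -s ' ' '\n' | sort | uniq -c
# hello appears twice; hadoop and world once each
```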

#### 2. Compile and package the WordCount program

```bash
javac -classpath /usr/local/hadoop-3.1.1/share/hadoop/common/hadoop-common-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar WordCount.java
jar cf /tmp/WordCount.jar WordCount*.class
```

#### 3. Submit the WordCount program

Make sure the input exists in HDFS first (e.g. `hdfs dfs -mkdir -p /tmp/input` followed by `hdfs dfs -put somefile.txt /tmp/input/`), then submit the job:

```bash
hadoop jar /tmp/WordCount.jar WordCount /tmp/input /tmp/output
```

#### 4. View the WordCount results

With the new MapReduce API, reducer output files are named `part-r-00000`, `part-r-00001`, and so on:

```bash
hadoop fs -cat /tmp/output/part-r-00000
```
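Each output line has the form `word<TAB>count` (TextOutputFormat separates key and value with a tab). A quick local sketch of post-processing such output, using sample lines in place of the real HDFS file:

```shell
# Sum the counts column of sample WordCount output (tab-separated key/value).
printf 'hadoop\t1\nhello\t2\nworld\t1\n' | awk -F'\t' '{ total += $2 } END { print total }'
# prints 4
```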

This article showed how to install and configure Hadoop 3.1.1 on CentOS 7.6 to build a distributed Hadoop environment, covering preparation, environment variables, Hadoop configuration, startup, and a WordCount smoke test.

Tags: hadoop, distributed, big data