source: gs3-extensions/maori-lang-detection/hdfs-instructions/conf/spark-defaults.conf.in@33524

Last change on this file since 33524 was 33524, checked in by ak19, 5 years ago
  1. Further adjustments to the documentation of what we did to get things running on the Hadoop filesystem. 2. Added all the Hadoop-related git projects (with patches), a separate copy of the patches, the config modifications and missing jar files we needed, and the scripts we created to run on the HDFS machine and its host machine.
File size: 1.3 KB
# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
# spark.eventLog.enabled           true
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

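# Anything set here is only a default: a job can override these values at
# submit time with --conf, e.g. (illustrative, not part of the original file):
#   spark-submit --conf spark.eventLog.enabled=false ...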
#spark.eventLog.dir hdfs://10.211.55.101/user/spark/applicationHistory
#spark.eventLog.dir hdfs://node1:8021/user/spark/applicationHistory
spark.eventLog.dir hdfs://node1/user/spark/applicationHistory
#spark.eventLog.dir hdfs:///user/spark/applicationHistory
spark.yarn.historyServer.address 10.211.55.101:18080
spark.eventLog.enabled true
spark.yarn.archive hdfs://node1/user/spark/spark-libs.jar
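# The settings above assume some one-time HDFS setup: the event-log directory
# must already exist, and spark-libs.jar (an archive of everything under
# $SPARK_HOME/jars, per the Spark-on-YARN docs) must be uploaded. A sketch,
# assuming node1 is the namenode (illustrative, not part of the original file):
#   hdfs dfs -mkdir -p /user/spark/applicationHistory
#   jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
#   hdfs dfs -put spark-libs.jar /user/spark/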


## --- START GS TEAM INSERT --- ##
# Config settings and authentication details for the IAM role we created
# for s3a access to CommonCrawl data, using the commoncrawl policy we created.
# Edit the access and secret keys below:
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key TYPE_AWS_IAM_ROLE_ACCESSKEY_HERE
spark.hadoop.fs.s3a.secret.key TYPE_AWS_IAM_ROLE_SECRETKEY_HERE
## --- END GS TEAM INSERT --- ##
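# With the keys filled in, Spark can read CommonCrawl data straight from S3
# over s3a. A minimal sketch (illustrative, not part of the original file; the
# WARC path is a placeholder, and hadoop-aws plus its matching AWS SDK jar must
# be on the classpath):
#   spark-shell --master yarn
#   scala> val warcs = sc.binaryFiles("s3a://commoncrawl/crawl-data/...")
#   scala> warcs.map(_._1).take(5).foreach(println)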